When The Model Models Itself

Guest Post by Willis Eschenbach

Eric Worrell posted an interesting article wherein a climate “scientist” says that falsifiability is not an integral part of science … now that’s bizarre madness to me, but here’s what she says:

It turns out that my work now as a climate scientist doesn’t quite gel with the way we typically talk about science and how science works.

1. Methods aren’t always necessarily falsifiable

Falsifiability is the idea that an assertion can be shown to be false by an experiment or an observation, and is critical to distinctions between “true science” and “pseudoscience”.

Climate models are important and complex tools for understanding the climate system. Are climate models falsifiable? Are they science? A test of falsifiability requires a model test or climate observation that shows global warming caused by increased human-produced greenhouse gases is untrue. It is difficult to propose a test of climate models in advance that is falsifiable.

Science is complicated – and doesn’t always fit the simplified version we learn as children.

This difficulty doesn’t mean that climate models or climate science are invalid or untrustworthy. Climate models are carefully developed and evaluated based on their ability to accurately reproduce observed climate trends and processes. This is why climatologists have confidence in them as scientific tools, not because of ideas around falsifiability.

For some time now, I’ve said that a computer model is merely a solid incarnation of the beliefs, theories, and misconceptions of the programmers. However, there is a lovely new paper called The Effect of Fossil Fuel Emissions on Sea Level Rise: An Exploratory Study in which I found a curious statement. The paper deserves reading on its own merits, but there was one sentence in it which struck me as a natural extension of what I have been saying, but one which I’d never considered.

[Figure: Galveston split-half test]

The author, Jamal Munshi, who it turns out works at my alma mater about 45 minutes from where I live, first described the findings of other scientists regarding sea level acceleration. He then says:

This work is a critical evaluation of these findings. Three weaknesses in this line of empirical research are noted.

First, the use of climate models interferes with the validity of the empirical test because models are an expression of theory and their use compromises the independence of the empirical test of theory from the theory itself.

Secondly, correlations between cumulative SLR and cumulative emissions do not serve as empirical evidence because correlations between cumulative values of time series data are spurious (Munshi, 2017).

And third, the usually held belief that acceleration in SLR, in and of itself, serves as evidence of its anthropogenic cause is a form of circular reasoning because it assumes that acceleration is unnatural.
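Munshi's second weakness (spurious correlation between cumulative series) is easy to demonstrate numerically. A minimal sketch using purely synthetic white noise, no real sea-level or emissions data involved: two series with no relationship at all become strongly "correlated" the moment you accumulate them.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n = 500, 100

abs_r_raw, abs_r_cum = [], []
for _ in range(n_trials):
    # Two completely independent white-noise series
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    # Correlation of the raw series is near zero, as it should be
    abs_r_raw.append(abs(np.corrcoef(a, b)[0, 1]))
    # Correlation of their cumulative sums is typically large --
    # an artifact of accumulation, not a real relationship
    abs_r_cum.append(abs(np.corrcoef(np.cumsum(a), np.cumsum(b))[0, 1]))

mean_raw = float(np.mean(abs_r_raw))
mean_cum = float(np.mean(abs_r_cum))
print(f"mean |r|, raw series:      {mean_raw:.2f}")
print(f"mean |r|, cumulative sums: {mean_cum:.2f}")
```

The cumulative series correlate far more strongly than the raw noise they are built from, which is why correlations between cumulative SLR and cumulative emissions prove nothing by themselves.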

Now, each of these is indeed a devastating critique of the state of the science regarding sea level acceleration. However, I was particularly struck by the first one, viz:

… the use of climate models interferes with the validity of the empirical test because models are an expression of theory and their use compromises the independence of the empirical test of theory from the theory itself.

Indeed. The models are an expression of the theory that CO2 causes warming. As a result, they are less than useful in testing that same theory.

Now, the scientist quoted by Eric Worrell above says that scientists believe the models because they “accurately reproduce climate trends and processes”. However, I see very little evidence of that. In the event, they have wildly overestimated the changes in temperature since the start of this century. Yes, they can reproduce the historical record, if you squint at it in the dusk with the light behind it … but that’s because they’ve been evolutionarily trained to do that—the ones that couldn’t reproduce the past died on the cutting room floor. However, for anything else, like say rainfall and temperature at various locations, they perform very poorly.

Finally, I’ve shown that the modeled global temperature output can be emulated to a very high degree of accuracy by a simple lagging and rescaling of the inputs … despite their complexity, their output is a simple function of their input.
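For readers who want to see the shape of that emulation argument, here is a minimal sketch. All numbers are synthetic and illustrative: the forcing series and the one-box "GCM" (with my own invented parameters tau_true and lam_true) stand in for the real thing. The point is only that a two-parameter lag-and-rescale emulator can reproduce that kind of output almost exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a GCM's global-mean temperature output:
# a one-box (lagged) response to a forcing series, plus weather noise.
years = 150
forcing = np.cumsum(rng.normal(0.02, 0.05, years))  # slowly rising forcing, W/m^2
tau_true, lam_true = 8.0, 0.5                       # lag (years), scale (K per W/m^2)

gcm = np.zeros(years)
for t in range(1, years):
    gcm[t] = gcm[t - 1] + (lam_true * forcing[t] - gcm[t - 1]) / tau_true
gcm += rng.normal(0, 0.02, years)

def emulate(forcing, tau, lam):
    """One-line-of-physics emulator: lag the forcing, then rescale it."""
    out = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        out[t] = out[t - 1] + (lam * forcing[t] - out[t - 1]) / tau
    return out

# Brute-force fit of the two emulator parameters
best = min(((tau, lam) for tau in np.arange(1, 20, 0.5)
            for lam in np.arange(0.1, 1.0, 0.02)),
           key=lambda p: np.sum((emulate(forcing, *p) - gcm) ** 2))

residual = gcm - emulate(forcing, *best)
r2 = 1 - np.var(residual) / np.var(gcm)
print(f"fitted (tau, lambda) = {best}, R^2 = {r2:.3f}")
```

In this toy setup the emulator and the data-generating process share the same form, so the near-perfect fit is expected; the claim in the post is that real GCM global-mean output behaves the same way.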

So … since:

•  we can’t trust the models because their predictions suck, and

•  we can emulate their temperature output with a simple function of their input forcing, and

•  they are an expression of the CO2 theory so they are less than useful in testing that theory …

… then … just what are they good for?

Yes, I’m aware that all models are wrong, but some models are useful … however, are climate models useful? And if so, just what are these models useful for?

I’ll leave it there for y’all to take forwards. I’m reluctant to say anything further, ’cause I know that every word I write increases the odds that some charming fellow like 1sky1 or Mosh will come along to tell me in very unpleasant terms that I’m doing it wrong because I’m so dumb, and then they will flat-out refuse to demonstrate how to do it right.

Most days that’s not a problem, but it’s after midnight here, the stars are out, and my blood pressure is just fine, so I’ll let someone else have that fun …

My regards to everyone, commenters and lurkers, even 1sky1 and Mosh, I wish you all only the best,

w.

My Usual Request: Misunderstandings start easily and can last forever. I politely request that commenters QUOTE THE EXACT WORDS YOU DISAGREE WITH, so we can all understand your objection.

My Second Request: Please do not stop after merely claiming I’m using the wrong dataset or the wrong method. I may well be wrong, but such observations are not meaningful until you add a link to the proper dataset or an explanation of the right method.


202 thoughts on “When The Model Models Itself”

  1. John von Neumann famously said:
    “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

    By this, he meant that one should not be impressed when a complex model fits a data set well. With enough parameters, you can fit any data set. It turns out you can literally fit an elephant with four parameters if you allow the parameters to be complex numbers.

    Drawing an elephant with four complex parameters by Jurgen Mayer, Khaled Khairy, and Jonathon Howard, Am. J. Phys. 78, 648 (2010), DOI:10.1119/1.3254017.
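von Neumann's warning is easy to verify in the general case: give a model as many free parameters as there are data points and it will "fit" anything, even pure noise. A quick sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Five arbitrary data points -- pure noise, no underlying relationship
x = np.arange(5.0)
y = rng.standard_normal(5)

# A degree-4 polynomial has five free parameters, so it passes through
# all five points exactly -- which says nothing about the data's meaning
coeffs = np.polyfit(x, y, deg=4)
fitted = np.polyval(coeffs, x)
max_residual = float(np.max(np.abs(fitted - y)))
print(f"maximum residual of the 5-parameter fit: {max_residual:.2e}")
```

A perfect fit here is guaranteed by counting parameters, not by any insight into the data.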

    • That looks a lot more like a Mastodon than an Elephant. But it seems a lot closer to being an Elephant than climate models do to being credible predictors.

      And it is undeniably cute.

    • “If you put tomfoolery into a computer, nothing comes out of it but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.”
      – Pierre Gallois

      • Mathematics may be compared to a mill of exquisite workmanship, which grinds you stuff of any degree of fineness; but, nevertheless, what you get out depends upon what you put in; and as the grandest mill in the world will not extract wheat-flour from peascod, so pages of formulae will not get a definite result out of loose data. – T. H. Huxley

  2. There has been a noticeable increase in papers which claim that using a model that produces the result they modelled for proves what they were modelling for is right.

    I think the authors actually believe this lunacy, partly because they start with such a strong belief in the result they want, partly because they have been conditioned to not actually think, and partly because they do not understand models.

    We seem to have reached Peak Lunacy in the AGW world, where the “science” that is being churned out is beyond parody and beyond reason. At the same time, the output of “real” climate science is correspondingly low – have there been any important advances in 5-8 years?

    I hope this means we are on the cusp of it all collapsing.

    • I think the authors actually believe this lunacy, partly because they start with such a strong belief in the result they want, partly because they have been conditioned to not actually think, and partly because they do not understand models.

      And partly because they don’t read skeptical critiques, only their side’s misrepresentations of them.

    • It is also indicative of novice or inexperienced programmers.

      Folks start out certain their code is perfect and that they are right. It takes about 5 years of hair pulling, midnight phone calls, crashes, and bug reports / bug stomping to drive that out of new programmers and let them reach the curmudgeon defensive-programmer style that has few bugs.

      At the 10 year point or so they even start to suspect the regression tests and QA suite of being wrong…

      (Me? Programmer for about 45 years, ran QA department for a compiler tool chain. Ported GIStemp to Linux in my office. Built a personal Beowulf cluster for fun. Currently running a 3 node, 12 core Raspberry Pi cluster with distcc for distributed compiles, and playing with two climate models – though at low priority. Not at all impressed with the climate models, and GIStemp is flat out junk. And yes, I now write VERY paranoid and VERY careful code and still don’t trust it. Why? Compiler bugs. I’ve actually had code that said, basically, IF A do foo, IF NOT A do bar, ELSE print “you can’t get here”; when run, it printed out “you can’t get here”… So my code is often testing for impossible cases… the climate codes I’ve seen look to be barely tested, often changed, and lacking a regression suite. A formula for rampant failure.)

      • Most code isn’t even written to be testable. I have come to believe that testing must begin at the requirements phase, long before code is even written. Otherwise it’s just an exercise in trying to find all the bugs and mis-specifications after the main development team has moved on to something else.

        Of course, the GCMs have no formal requirements or tests of any kind, so they are just lab toys and not suitable for basing public policy on.

      • Yep! Many years ago, I found that a brand new, shiny, FORTRAN 77 compiler for an ICL mainframe introduced bugs when you turned off debug mode. Took a lot of going through hex dumps to find out it was the COMPILER, not me!
        This sorta leaves a programmer in an almost permanent paranoid mode.

  3. Dear Willis.

    See my long post in the previous article.

    Climate science is a metaphysical construct reflecting a deep set sense of fear and guilt over mankind’s presumed power to affect Nature, and his terror at the power he thinks he has being misapplied enough to terminate his existence.

    Used by people who don’t care anyway to further their political and commercial ends.

    A guy called Cnut tried to remedy this years ago, but just turned into an anagram instead.

    • Do young people today even know about King Canute?

      Great wisdom is embedded in Western Culture. That doesn’t suit the postmodern feminists who run academia because it was all created by old dead white men. These idiot scumbag Marxists have missed the biggest lesson passed on to us by the ancient Greeks, and then Canute, which is about the dangers of hubris.

      • Many youngsters have heard the story of King Canute, but they heard it told by Marxist indoctrinated teachers, so instead of being a story where the wise king shows the folly of human power over nature to his misguided court, it is transmogrified into a story about the folly of kings with little implication for the impact on society.

      • It’s funny Canute didn’t show his court a model of the tide turning back and claim even greater God given powers. Especially to tax to prevent the tides overtaking the land.

      • The story of Canute was presented as a story about the folly of kings long before there were any “Marxist indoctrinated teachers”.

    • It is not generally known that a law passed during his reign was the basis for the American Revolt’s claim that Parliament did not have the authority to tax them. Proof will be available in my book soon to be published.

  4. I can see climate models as useful tests of collecting together what people THINK they know about climate and letting it run for 10 years. Repeat…..

    What I don’t like about the climate model crowd now is that they purposely misrepresent what the models even show. If ONE of your runs was close in 2016, another one was close in 2015, another one in 2014, that doesn’t mean they were based on a solid understanding. They could have just ended up there by coincidence because there are only so many options that the end result can be.

    Until individual runs can start getting 8 out of 10 years right, they are not to be relied on for policy. Looking at dozens in a suite and saying they each individually got a certain year right is total garbage.

    • Not one has gotten even one year “right”. They all miss MAJOR elements of the observed climate system; they just sometimes come closer on temperature than at other times.

    • So if they “adjust” the input parameters to get a correct projection, does that mean the model is correct? What if they have one forcing input too high and another too low? Same result! What if they have a hundred slightly too low and one big one too high? Same result! What if they don’t like the results and start to “adjust” the inputs that seem to be “holding it back”, and then convince themselves that they’ve stumbled onto the magic formula that “explains” climate and generates new funding? Well! Now they’re cooking with gas!
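The compensating-errors point can be made concrete with a deliberately trivial toy model (all numbers invented for illustration):

```python
# Toy "model": warming = sensitivity * (ghg_forcing + aerosol_forcing).
# Illustrative numbers only. Two very different parameter sets that
# happen to have the same net forcing reproduce the observed warming
# equally well -- so matching the record cannot tell them apart.
def toy_warming(sensitivity, ghg, aerosol):
    return sensitivity * (ghg + aerosol)

observed = 0.9  # K, say

run_a = toy_warming(sensitivity=0.45, ghg=3.0, aerosol=-1.0)  # high GHG, strong aerosol offset
run_b = toy_warming(sensitivity=0.45, ghg=2.5, aerosol=-0.5)  # lower GHG, weak aerosol offset

print(f"run A: {run_a} K, run B: {run_b} K, observed: {observed} K")
```

Matching the observed number cannot distinguish run A from run B, even though they disagree about how much warming each forcing is responsible for.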

    • John,
      You said, “… models as useful tests of collecting together what people THINK they know about climate…” Indeed, that should be their only function, to test what we think we know about climate. Unfortunately, climatologists seem to be learning little, and arrogantly claim that their results are reliable and useful. In other words, climate modelers think they know everything!

    • No, it must both get at least nine out of ten years right and be able to show the reason the other was not right.
      And there must be no need for any adjustment of the data. Engineers could get data on climate accurately enough to not need “adjustment”, so if climate scientists cannot do this then they need to call in professionals to do the job. These are low-grade commercial quality requirements, not the requirements for the life-critical function the climate scientists claim climate change to be.
      Any self-respecting engineer would know the need for a proper reference network of at least a dozen sites for calibration, and for regular inspection of the sites for compliance with the specification for measuring sites.
      If the reference sites and the calibration ones do not match, then that mismatch is the uncertainty, so any deviation of that order cannot be used as evidence in the case.

  5. I would dispute the claim “their predictions suck”. To begin with, they get the average temperature of the Earth correct to within 0.2 K, which is an error of less than 0.1%. So my question would be: are there any other models of the Earth’s climate that are as accurate? And secondly, if an error of 0.1% counts as “sucking”, what would you consider to be good?

    • Thanks, Germonio. As I said, they are tuned to the historical temperature, so that is not a valid test. When they are tested on what they are not tuned on, they do very poorly.

      In addition, they don’t agree with each other to within 0.2K. Consider:

      w.

      • Yes. That’s a classic sign of overfitting: the model is so fine-tuned to existing observations that it reproduces too much of the noise in those observations; when provided a fresh batch of observations (inherently laden with signal and noise), the model performs poorly.

        The model is an expression of the theory, and to the extent that the model generated testable predictions is the extent to which it is scientific. Importantly, to the extent that the model parameters can simply be tweaked to fit existing data is the extent to which the model is circular and, therefore, pseudoscientific: you are tweaking the parameters to fit existing observations because you just don’t have a good explanation of the parameters in the first place. I would say that sometimes this is a necessary evil, but it absolutely must be repeatedly acknowledged to be so by the model’s creators, purveyors, and consumers. And this last part is just not happening at all it seems.
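That overfitting failure mode takes only a few lines to sketch. A toy example (a synthetic straight-line "climate" plus noise; degrees 1 and 12 are arbitrary illustrative choices): the over-parameterized fit looks better on the data it was tuned to, and worse on fresh data.

```python
import numpy as np

rng = np.random.default_rng(1)

# The "true" process in this toy world is a plain straight line;
# everything else in the record is noise.
def truth(x):
    return 0.5 * x

x_train = np.linspace(-1.0, 1.0, 15)
y_train = truth(x_train) + rng.normal(0.0, 0.2, x_train.size)

# A "fresh batch of observations", reaching slightly past the
# training range -- as any real forecast must
x_test = np.linspace(-1.0, 1.1, 200)
y_test = truth(x_test) + rng.normal(0.0, 0.2, x_test.size)

def rmse_pair(deg):
    c = np.polyfit(x_train, y_train, deg)
    rin = np.sqrt(np.mean((np.polyval(c, x_train) - y_train) ** 2))
    rout = np.sqrt(np.mean((np.polyval(c, x_test) - y_test) ** 2))
    return float(rin), float(rout)

in1, out1 = rmse_pair(1)     # parsimonious model: 2 parameters
in12, out12 = rmse_pair(12)  # over-tuned model: 13 parameters
print(f"degree 1:  in-sample RMSE {in1:.3f}, out-of-sample RMSE {out1:.3f}")
print(f"degree 12: in-sample RMSE {in12:.3f}, out-of-sample RMSE {out12:.3f}")
```

The extra parameters buy a better hindcast and a worse forecast, which is the pattern the comment above describes.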

    • Germonio: The temperature of the entire Earth is not known, nor is it knowable, especially down to a fraction of a degree.

    • How can you be so sure “the temperature of the earth” someone tells you is exact to within 0.2 K plus or minus 0.1%? Perhaps the temperature of the earth is exact to 0.2 K plus or minus 5 K. I think we should consult the book of Daniel to figure out what exactly the temperature of the earth was in some concrete year.

    • Interesting. Much of the historical temperatures the models are backcasting were taken to the nearest degree. So that’s +/- 0.5 degree. So how can a model get results that are more accurate than the input data?

    • What a smart trick, using degrees K so that model errors come out to a very small percentage: just 0.1%. Wow, that’s incredibly accurate, we’re all super impressed. But there’s one teensy weensy little problem …
      … the change in CO2 concentration that supposedly caused this change is around 0.01% of Earth’s atmosphere. Far from being teensy weensy, a 0.1% error somehow looks absolutely gross against an atmosphere changing by 0.01%.

      All of which demonstrates that rating results based on their percentage of something else is mathematical madness.

      Fact is, we aren’t even looking in the right place for global warming. We should be looking for total heat content of ocean plus atmosphere (the parts of Earth that relate to climate), not just in the atmosphere. There are various rather obvious reasons for this, one of which is that when our spurious atmospheric temperature increases because of an El Nino, that isn’t a warming, it’s a cooling (an El Nino is, you might say, one of Earth’s ways of releasing energy to space).

      • Using Kelvin, the total warming since the end of the little ice age is only about 0.2%.
        IE, nothing to worry about.

    • G, your assertion is not true for three reasons.
      Essay Models all the way down shows that in absolute rather than in anomaly terms the CMIP5 models disagree with each other by +/- 3C.
      In the tropical troposphere they run hot by ~2x (Santer with incorrect stratosphere correction) to ~3.5x (Christy).
      They failed to reproduce the Pause.

      • The difference between the models is almost an order of magnitude larger than the total warming (from all causes) since the end of the little ice age.

    • Weather station measurements are typically good to +/- 0.5 K. The measurements are then “corrected” and “homogenized” through a series of steps which, one by one, make them further and further from actual data. By the time they are all “averaged” to produce a number that has no actual meaning, they no longer represent anything that was actually measured. Models that match that number are tuned to do so, and represent extremely expensive, well-disguised curve fitting of meaningless data.

    • Germonio,
      If you want to work in the Kelvin world, a predicted increase of 3 deg C over the next century is only an increase of about 0.1% per decade. Why should anyone get particularly concerned about that?

    • Germonio, the problem is that over 99% of the range is meaningless. You don’t go based off the absolute error. You go based on the relevant temperature range. Since the average changes less than 1C year to year, that means your error is 20% of your maximum range.

      I hope you are being facetious. I just can’t tell anymore.

  6. This one is a real beauty:
    “When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. The bias correction or adjustment linearly corrects for model drift.”
    (Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; Page 967)
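The “a posteriori” drift correction that AR5 passage describes can be sketched with synthetic hindcasts. All numbers here are invented for illustration (the drift shape, the 1.5 C climatology bias, the noise levels); the procedure is the standard one: estimate the mean drift at each lead time from past hindcast-versus-observation pairs, then subtract it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic decadal hindcasts: the "model" is initialized at the
# observed state but drifts toward its own (biased) climatology
# over the first few lead years.
n_starts, n_lead = 40, 10
truth = rng.normal(14.0, 0.1, (n_starts, n_lead))   # observed values, deg C
model_bias = 1.5                                    # model climatology 1.5 C too warm
drift = model_bias * (1 - np.exp(-np.arange(n_lead) / 3.0))
hindcast = truth + drift + rng.normal(0, 0.1, (n_starts, n_lead))

# Empirical correction: estimate the mean drift at each lead time
# from past hindcasts, then subtract it from every forecast.
mean_drift = (hindcast - truth).mean(axis=0)
corrected = hindcast - mean_drift

raw_bias = float(np.abs(hindcast - truth).mean())
corr_bias = float(np.abs(corrected - truth).mean())
print(f"mean |error| before correction: {raw_bias:.2f} C")
print(f"mean |error| after correction:  {corr_bias:.2f} C")
```

Note that the mean drift is estimated from past hindcasts where the observations are already known, so the correction is empirical tuning against the record rather than physics.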

  7. If scientists working on a stock-market model were able to tune it to match the market results of the past ten years, I still wouldn’t trust it with my money to accurately forecast next year’s market. Would you?

  8. Totally.
    And on so many levels.
    1. The computer is pretty well today’s ultimate appeal to authority. So, tell someone to do something because ‘the computer said so’ – what are they gonna do?
    It’s like throwing up endless links & references. It makes impossible work for the fellow arguing against you.
    There is nothing currently better for throwing up chaff and noise than computers.

    2. How do you know the computer output is correct if not because it is what you expect it to be?
    (You’re seeing your own reflection as I’ve said numerous times)

    3. Mosher tells us “computers are tools”
    Yes, he’s absolutely right about a computer sitting there doing nothing. But give it to someone to actually use and it becomes like the big sharp knife in your kitchen drawer – it can as easily be used for slicing cheese as for a murder weapon. It’s the intent of the user – how do you prove or disprove that?
    Certainly meeting the intended user face-to-face but, surprise surprise, computers are removing the need for that. At least according to the computer users.
    Fantastic positive feedback. Again, because what drives the posited GHGE if not positive water-vapour feedback?

    I think therein is THE major problem – computers and the belief of their infallibility.

      “3. Mosher tells us “computers are tools””

      Perhaps he had this alternative definition in mind:

      “tool:
      One who lacks the mental capacity to know he is being used. A fool. A cretin. Characterized by low intelligence and/or self-esteem.”

    • Computers are often merely a substitute for thought. Rather than thinking through a problem ab initio, you develop a model (which of course merely expresses your assumptions and biases), then you throw numbers at it and announce the results as infallible, not to be questioned, because the computer says so.

      All that a model tells you is that, if the world behaved in the very simplistic manner which your software models, then the future would be so-and-so. However, until you can demonstrate that your model faithfully includes every significant real-world factor, your model isn’t worth the paper on which the results are printed out.

      Since current climate models omit vast swaths of significant real-world factors, such as cloud cover, and don’t have anything like the spatial resolution to model real-world effects, anyone who believes in their results is a fool.

      • For models of complex and/or chaotic systems it doesn’t even require that the model be simplistic to be impressively wrong. Even the smallest item of input not quite right creates nonsense in pretty short order. This reflects what is known about climate and the much greater amount that is not understood.

      • All scientific theories are models, usually systems of equations. The problem with computerized climate models is simply that they have never been validated. Of course validation would take 30 years (climate has previously been defined at WUWT as the weather averaged over 30 years).

  9. Science is complicated – and doesn’t always fit the simplified version we learn as children.

    Some things are actually simple. Liars and frauds try to convince us otherwise using my ‘favorite’ technique, BS baffles brains.

    The tremendous technological advances of western society are due to the simple scientific method. It replaced the previous, in the west, reliance on ancient Greek experts. The epitome of the old approach is the Summa Theologica. In it, Saint Thomas Aquinas uses logic and appeal to experts to prove everything. If you want to debate how many angels can dance on the head of a pin your starting point should be the Summa.

    Dr. Sophie Lewis is a real scientist and a real expert. That does not make her reliable and trustworthy. Experts are extremely fallible because of their overconfidence. It’s this overconfidence that lends Dr. Lewis her frisson of arrogance and self delusion. It reminds me of:

    I do not like thee, Doctor Fell,
    The reason why – I cannot tell;
    But this I know, and know full well,
    I do not like thee, Doctor Fell.

    • CommieBob,
      Apparently Sophie has a PhD and an academic position with the putative title of scientist. However, I question whether she really has the mindset and thinking pattern of a scientist. She seems not to have a firm grasp of the Scientific Method, and thinks that model results she personally approves of are sufficient reason to endorse them as valid. She is living in a subjective fantasy world where she thinks her education and job title are sufficient to suppress objections and obviate the need for further research.

  10. Is this guy Willis a *snip*, or what?

    * I just snipped this rather than trashing the whole comment because you are new here(?). Miss Swiss, we do not allow this type of post. Your posts that contain arguments with the substance of the article are welcome, personal attacks or gratuitous insults of the author or other posters are not.

    Thanks – Mod

  11. “Indeed. The models are an expression of the theory that CO2 causes warming. As a result, they are less than useful in testing that same theory.”

    Except that is not the theory.

    The Theory is that the climate of the planet is the result of

    ALL external forcings
    AND
    Internal variability.

    So if you build a model of the planet and you only include solar forcing and say volcanoes… Guess what?
    Your climate model will really suck…

    So you add Methane
    it gets better
    you add HFCs
    it gets better
    you add CO2

    And Dang, you can model this very complex thing and get answers that are correct to within small percentages.

    Now the climate model doesn’t represent the whole theory. The same way CFD code doesn’t and can’t fully represent the flows of fluids. Further, nobody believes in the theory (climate results from external forcing and internal variability) BECAUSE of the model. And finally, if models were just crazy wrong, if they showed cooling from the addition of CO2, we would know the models were wrong, because they violate what we have known since 1896. In short, models add nothing to the foundation of our knowledge, and if they fail, that says nothing about the theory. It rather means the model is wrong, not the theory that it was struggling to represent.

    In short: destroy every model ever constructed and we still know what we knew in 1896 (CO2 causes warming, like ALL GHGs), and we still know what steam engineer Callendar knew: GHGs cause warming.

    • I don’t think many of us here deny the physics of GHG warming, reduced to its simplest equation. What’s at issue is near-impossibility of modeling the Earth’s fantastically complex, chaotically-coupled heat flows. Especially the feedbacks which can counteract or even reverse warming, many of which are poorly understood and hence poorly modeled.

      • “You can model this very complex thing and get answers that are correct to within small percentages.” As radiation is central to your argument, you have to use absolute temperatures. They are around 300 K; a small percentage – let’s say 2% – is 6 degrees K = 6 degrees C = about 11 degrees F.

      • Mosher “Except that is not the theory.”

        You haven’t gotten to the Theory Stage, you are barely in the Hypotheses Stage. Write back when you have the “in-depth explanation of the observed phenomenon”.

        “Hypotheses, theories and laws are rather like apples, oranges and kumquats: one cannot grow into another, no matter how much fertilizer and water are offered,” according to the University of California. A hypothesis is a limited explanation of a phenomenon; a scientific theory is an in-depth explanation of the observed phenomenon. A law is a statement about an observed phenomenon or a unifying concept.
        https://www.livescience.com/21457-what-is-a-law-in-science-definition-of-scientific-law.html

    • If this was about CO2, there wouldn’t be supercomputers crunching the numbers – it could be done on the back of a packet of Craven “A”. Instead it is the rather more inscrutable feedbacks. Feedbacks as modelled claim calamity; this means that civilization has to be destroyed in order to be saved.

    Steven Mosher: “And Dang, you can model this very complex thing and get answers that are correct to within small percentages.”

      By introducing Celestial Spheres, planetary orbits could be modeled to ‘within small percentages’ with Earth as the center of the solar system.

      • This is the core issue. A model cannot verify a theory, because there are many (perhaps infinitely many) models that could produce results close enough to the observed (and highly abstracted and/or AVERAGED) behavior of any system.

        And that is why the falsifiability of any theory is so important.

        And that is what models are very useful for: falsifiability (i.e., inability to predict the abstracted and averaged behavior, given measured/known changes of their supposed “exogenous” variables).

        And the GHG driven climate models have done an excellent job of that.

        That should have ended this debate years ago. And climate scientists should instead be looking hard at the many other variables and missing dynamics (i.e., relationships) that they have failed to understand or put into their model, or even consider important. Fortunately a few are (I think).

      • You get answers that are meaningless! The answer doesn’t tell you which parameters are wrong, in which direction they are wrong or by how much they are wrong. They don’t tell you which factors you “forgot” to include or are utterly unaware of. They don’t tell you how the various factors interact. So even if you get a “correct” answer it means absolutely nothing! These models aren’t even used to get or test information.
        They are used to put a scientific stamp of authority on a computer game for political purposes!

    • “And Dang, you can model this very complex thing and get answers that are correct to within small percentages.”

      Because the temp of this planet only changes in small percentages…

      Oddly enough….if the models were tuned to past raw data….their linear predictions would be more accurate
      ….which, if anything, shows that adjustments to past temps, cooling the past……is fake data

    • Steven Mosher August 11, 2017 at 2:04 am

      … we still know what steam engineer Callendar knew: GHGs cause warming

      I’ll give you that. What he told us would result in a completely non-alarming warming of maybe a degree and a half per doubling of CO2.

      CAGW happens only if there is positive feedback. Even if we totally ignore natural variability and attribute all the warming in the last century and a half to CO2, the evidence is that the net feedbacks are negative.

      • “I’ll give you that.”

        I’ll take your gift back. We know only that it has a potential to cause warming. Negative feedbacks can attenuate, or even completely squash that warming potential.

      • Yes. What about chaos? Very good point, and one that not only cAGW advocates but also sceptics sweep under the rug. Why would that be?

        The ground truth is that chaotic systems can’t be modeled predictively. Why, you might ask? That was the question that drove Edward Lorenz to develop chaos theory back in the early 1960s. I suppose you already knew Ed was a meteorologist and were posing a rhetorical question, but it deserves discussion.

        What about chaos?

        As a species we lack the mathematical skill to model chaotic systems. Quantum theory suggests we won’t ever be able to. The Navier-Stokes problem demonstrates we certainly can’t now, using existing mathematics and computational methods, which aren’t to be confused with computational abilities; faster computers won’t solve this problem.

        So, what about chaos? And why are we having this silly debate?
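        Lorenz’s sensitivity to initial conditions is easy to see numerically. A minimal sketch (Python with numpy; a crude Euler stepping of the Lorenz-63 equations, with step size and run length chosen only for illustration): two runs that differ by one part in a billion at the start end up bearing no relation to one another.

```python
import numpy as np

def lorenz_x(x0, y0, z0, steps=5000, dt=0.01,
             sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Crude Euler integration of the Lorenz-63 system; returns x(t)."""
    x, y, z = x0, y0, z0
    out = np.empty(steps)
    for k in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[k] = x
    return out

a = lorenz_x(1.0, 1.0, 1.0)
b = lorenz_x(1.0 + 1e-9, 1.0, 1.0)  # one part in a billion difference

print(abs(a[100] - b[100]))                 # still indistinguishable
print(np.max(np.abs(a[3000:] - b[3000:])))  # comparable to the attractor's size
```

        No amount of computing power fixes this: the divergence rate is a property of the equations, not of the integrator.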

    • Yeah, but what we “knew” in 1896 changed in 1906, when Arrhenius admitted that he’d overestimated the impact of a doubling of CO2 by 250-300%.

      • Please provide parameters for Warming potential AND negative feedbacks for water vapour. Not theoretical or modelled, tested and verified.

    • When you say that the models get better as you include more GHG components, you must be referring to the way they give a fair correspondence between temperature anomalies and the models during the 1975-2000 warming period. That is often cited as a reason for confidence in the models. But the models simply do not track the data well during the 1915-1945 period, when the earth warmed at a similar rate. GHG concentrations were too low to have much of an effect on either reality or the models in that period, which leaves natural variability as the likely cause of that warming — and a strong suspicion that it was also the cause of the 1975-2000 warming.

      • “…the models get better as you include more GHG components…”

        Funny, the more terms I include in a polynomial or Fourier series representation, the better I can fit the data, too.

        That’s what it comes down to. It is the same process. Curve fitting. And, the more complete your basis functions, the better you can make the expansion fit.

        Far from being evidence in favor of the models, it is mere tautology.
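        The curve-fitting point can be sketched directly (Python with numpy; the signal and the noise level are invented for illustration): as the polynomial degree rises, the in-sample error can only fall, whether or not the extra terms mean anything.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(np.pi * x) + rng.normal(0, 0.3, x.size)  # signal plus noise

# In-sample RMS error can only fall as the polynomial degree rises,
# because each higher-degree fit contains the lower-degree one.
rms = {}
for degree in (1, 3, 7, 13):
    coeffs = np.polyfit(x, y, degree)
    rms[degree] = float(np.sqrt(np.mean((y - np.polyval(coeffs, x)) ** 2)))
    print(degree, round(rms[degree], 3))
```

        The falling error is a property of nested least-squares fits, not evidence that the higher-degree model captures anything real.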

      • @Bartemis:

        But no one ever publishes a model based on my personal favorite independent variable; historical pork belly prices on the Chicago Exchange.

        I’ve advanced this factor for consideration many times in many different forums and venues. No one has ever included it in a study. I feel bad and I need to find a safe space. I think I’m being bullied.

        I need a support group. Is there a support group for middle aged white male statisticians?

    • Mosher ==> The actual questions that need to be answered in today’s world are:

      “Given that GHGs cause warming (that’s why we call them GHGs…), do more GHGs cause more warming? and if so, how much more of which GHGs cause how much more warming? and, is that warming a positive outcome? a negative outcome? a mixed outcome? “

      Supplying trivial answers to trivial questions [“GHGs cause warming”] is not climate science — it hardly even scores as advocacy or politics.

      • “…do more GHGs cause more warming?”

        YES! That is THE question. This is a dynamic system. It is not required that it respond the same in all states. We want the incremental sensitivity. It is quite possible to have a GHE that works up to fundamental limits, and then peters out beyond them.

      • Bart ==> It is not only a dynamic system, it is composed of (at least two) coupled non-linear dynamic systems — many bets are therefore way off.

    • Mosher,
      You said, “…,if they showed cooling from the addition of c02, [sic] we would know the models were wrong.” So, you feel that they would have to be completely ‘bassackwards’ before they should be invalidated? What about a quantitative difference that makes them unsuitable for the purpose of forecasting? The current models MAY have the trend right, but if the magnitude is wrong, then they aren’t really useful for long-range forecasting, which is what they are being abused for (double entendre intended). You should ask yourself just what is the purpose of all the money spent on climate models and whether that purpose has been achieved. David Middleton’s graphs suggest not!

    • @Mosher – I don’t understand how you can get even vaguely accurate results without including cloud cover, water vapour, and the effects of tropical thunderstorms, amongst other factors.

      And another question: why do you describe the results as accurate when no climate model has been accurate in predicting temperatures?

      I simply don’t understand your comment “if they showed cooling from the addition of c02, we would know the models were wrong.” How could they possibly do that when they are programmed to treat CO2 as warming?

      • Mr. Mosher
        “if they showed cooling from the addition of c02, we would know the models were wrong.”
        Completely wrong conclusion.
        If the MODELS showed cooling from the addition of CO2 where the THEORY predicts warming, then we would know that the THEORY is wrong, especially when the temperature DATA trends do not follow the CO2 data trend.

      • Old England asks: “why do you describe the results as accurate when no climate model has been accurate in predicting temperatures ?”

        I don’t really know why Steven does this. You’re right that it makes no sense; anyone who can read a graph can see the models are so wrong there’s no doubt of it, yet he and many others persist in saying they’re right anyway.

        It’s as if they think we’ll all just suspend disbelief and agree with them. Maybe if there’s enough of them saying it, and they say it long enough, we’ll all just abandon logic and agree?

        I think that’s really the strategy they’re depending on. As I recall it was one successfully used by the German National Socialists back in the 30’s. Some guy named Herman I think? Could be wrong about that, but I’m pretty sure it’s a famous kind of propaganda.

      • Old England: Then there’s always the more pedestrian “I Want To Believe” axiom promoted by Fox Mulder on the TV series “The X-Files”. That might also explain Steve.

    • Steve Mosher writes: “And Dang, you can model this very complex thing and get answers that correct to within small percentages.”

      We see this claim repeated endlessly, Steven; over and over the claim is made that “the models are correct to within small percentages”, but it flat out isn’t true. Repeating the lie works for the addled, but it doesn’t work for anyone with a working brain.

      Essentially, it’s just another version of the false “appeal to authority”; the model results have been published. They’ve been published by an authority. Pay no attention to the fact they’re demonstrably wrong, we think they’re right and we will brook no argument!

      It’s no way to win a scientific debate Steve. Falls right on its face. It’s right up there with “97%”. It’s crapolla. Pure nonsense.

      You show me a model that actually predicts climate and we’ll talk? Until then, you got nothin’ dude. Nothin’.

      PS: And what’s with this “averaging” nonsense? The idea that you can take the outputs of a hundred or more unique models, average them, and get anything meaningful? This is basic experimental stats; you can’t do that. It’s flatly wrong. Braindead stupid. Who let these fools out of their cage?

    • The biggest problem with tuning the models to past climates is that most of the factors that impact climate are not known with any degree of certainty and the further back you go, the worse that problem gets.

      Let’s just look at aerosols, though most of the other parameters used for tuning are just as bad.

      How much was released in any given year and from where?
      There are many types of aerosols each of which has a different impact on the climate.
      Things like the height of the stack and weather conditions at the time of release have a huge impact on how long the aerosols stay in the air and how far they spread.
      In places like Europe and the US/Canada, little is known about how much and what types of aerosols were released prior to the existence of the EPA, when companies were first required to keep track of that stuff.
      For the rest of the world data is sparse to non-existent.

      As a result the “modelers” are permitted to pick whatever number is needed to make the numbers work.
      So yes, they are able to model historical temperatures, but that has nothing to do with whether the models are accurate. It just means they have enough wiggle room in their parameters to make it look like they are.

      • “The biggest problem with tuning the models to past climates is that most of the factors that impact climate are not known with any degree of certainty and the further back you go, the worse that problem gets.”

        No, the biggest problem is that it requires using an empirical model to predict a system’s behavior outside the period of observation. That’s a fundamental no-no in statistical modeling, Mark. Never valid.

        Example: I have measurements of tree ring widths and temperature over a period of 100 years. I fit a regression model to those values. I have no physical theory to support that relationship, I simply observe agreement.

        I can, legitimately, use such an empirically derived model to predict the value of temperature given the width of a tree ring within that 100 year interval, but I cannot use that model to predict the value of temperature outside that time period. That’s “extrapolation” and it can’t be done using an empirical model.

        The procedure is statistically/experimentally invalid. We can never extrapolate from an empirical model. It’s a rule, Mark.
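        A minimal sketch of the interpolation-versus-extrapolation point (Python with numpy; the “truth” curve and noise level are hypothetical, chosen only for illustration): a straight line fitted inside the observation window looks tolerable there and goes badly wrong outside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "truth": a relationship that curves, observed only over t = 0..99.
t_all = np.arange(200.0)
truth = 10.0 * (t_all / 100.0) ** 3

t_obs, obs = t_all[:100], truth[:100] + rng.normal(0, 0.5, 100)

# Fit a straight line using only the observation window.
slope, intercept = np.polyfit(t_obs, obs, 1)
line = slope * t_all + intercept

in_err = np.max(np.abs(line[:100] - truth[:100]))    # inside the window
out_err = np.max(np.abs(line[100:] - truth[100:]))   # extrapolated
print(in_err, out_err)
```

        The extrapolation error ends up many times the worst in-window error, even though nothing about the data changed; only the assumption that the fitted relationship holds outside the window did.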

    • “if they fail, that says nothing about the theory. it rather means, the model is wrong, not the theory”

      Good, so CAGW is just a theory, not an absolute irrefutable fact. Please tell Al Gore etc.

    • Steven Mosher
      August 11, 2017 at 2:04 am

      “In short. Destroy every model ever constructed. we still know what we knew in 1896 ( C02 causes warming, like ALL GHGs ) and we still know what steam engineer callandar knew: GHGs cause warming”
      ——————–

      Short…….and to the point, I think.
      No need for models; we already know about GHGs, and can’t risk having that confused by the models…
      Besides, what else is there to do if we figured things out that fast? 1896 is like yesterday… :)

      Thanks Mosher…

      cheers

    • Mosh,
      “because they violate what we have known since 1896”
      Who is the “we” you refer to in that statement? Because I’ve known for years that Arrhenius “knew” that the calculations he’d arrived at in 1896 were wrong. He publicly changed them in 1906. I’ve also known since viewing the website linked to below, that many of Arrhenius’s assumptions were wrong, his attributions to other scientists were false, his methods flawed, and that his theory of “backradiation” seemingly violates the laws of thermodynamics.

      http://greenhouse.geologist-1011.net/

      “However, Arrhenius’ calculations are based on surface heating by backradiation from the atmosphere (first proposed by Pouillet, 1838, p. 44; translated by Taylor, 1846, p. 63), which is further clarified in Arrhenius (1906a). This exposes the fact that Arrhenius’ “Greenhouse Effect” must be driven by recycling radiation from the surface to the atmosphere and back again. Thus, radiation heating the surface is re-emitted to heat the atmosphere and then re-emitted by the atmosphere back to accumulate yet more heat at the earth’s surface. Physicists such as Gerlich & Tscheuschner (2007 and 2009) are quick to point out that this is a perpetuum mobile of the second kind – a type of mechanism that creates energy from nothing. It is very easy to see how this mechanism violates the first law of thermodynamics by counterfeiting energy ex nihilo, but it is much more difficult to demonstrate this in the context of Arrhenius’ obfuscated hypothesis.”

      You might check out his Most Misquoted Scientific Papers section too.

  12. One point that I have never seen discussed is that machine-learning models supposedly have three phases.

    The first is the learning phase, where they run iteratively against the real historical dataset in order to produce a good fit.

    The second part is where they run against a more recent part of the historical dataset, one that wasn’t included in the first phase, and demonstrate that they can track it.

    The third part is where they are allowed to run free and predict the future.

    The argument is that the second part ‘proves’ that the model can be trusted. In reality it merely forms an extension of the learning phase, because it would take a very peculiar person to release a model to the world that failed the second part.
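    The three phases described above amount to a chronological split of the record. A minimal sketch (Python with numpy; the fractions are illustrative, not anyone’s published procedure):

```python
import numpy as np

def chronological_split(series, learn_frac=0.6, check_frac=0.2):
    """Split a time-ordered record into the three phases, in order:
    learning, checking against held-back history, and free-running.
    No shuffling: the time ordering is preserved."""
    n = len(series)
    i = int(n * learn_frac)
    j = int(n * (learn_frac + check_frac))
    return series[:i], series[i:j], series[j:]

record = np.arange(100)  # stand-in for a historical dataset
learn, check, free = chronological_split(record)
print(len(learn), len(check), len(free))  # 60 20 20
```

    The objection, in these terms: if models that fail on the middle segment are quietly discarded and re-tuned, that segment has effectively been absorbed into the learning phase, and only the third segment ever tests anything.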

    • “it would take a very peculiar person”

      Or maybe someone not smart enough to make an honest living.

      If you’re a third-rate student taking a PhD just because you can’t face leaving school, wouldn’t you choose a subject where no one questions your work as long as you reach the ‘right’ answer? It’s been going on for so long that virtually everyone in climatology fits into that category.

    • Greg: first it’s important to understand how machine learning (the type you describe) works. Only then can you understand how a five-year-old who has learned to tell the difference between a car and a cow can correctly classify a 1971 Porsche 914 as a “car” and not a “cow”, even though that child has never seen a 1971 Porsche 914.

      This is an example of poor reasoning by analogy.

  13. 1) Models are not useful to glean the future.
    2) Models are useful to highlight the things you do not yet know.

    If you claim that your models are accurate then you still can’t do 1) and you completely miss out on 2).

    • I agree with your comment in a general way, Ed, but when you plug in a significant number of variables you don’t really get anything that tells you which ones are right or wrong. If a baseball game goes 20 innings and ends 33-32, what single event (pitch, swing, catch, error, stolen base, injury, coaching decision, etc.) decided the game? I have simplified for the purposes of modelling!

      • You know John, over the years we statisticians have made some (arguably barely useful) progress on that subject.

        If you look at the dark art of multiple regression (AKA Principal Factor Analysis in its more advanced form) you’ll discover the “F to Enter” test, which sets a threshold of acceptability for a variable to enter the regression. In essence, if the addition of a variable doesn’t significantly improve the model fit, it’s rejected.

        This has the unfortunate side effect of promoting what I call the “Cuisinart” approach to model development; the investigator collects all data that might possibly be relevant to determining the value of the dependent variable, pushes the button, and waits for the computer to spit out a model.

        It’s about as far from science as you can get and still use statistics.
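        The “F to Enter” procedure described above can be sketched as a greedy forward selection (Python with numpy; the threshold of 4 and the synthetic data, including a stand-in “pork-belly” column, are illustrative only, not any particular package’s implementation):

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of the least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def forward_select(candidates, y, f_to_enter=4.0):
    """Greedy forward selection: admit the candidate column that most
    reduces RSS, but only if its F statistic clears the threshold."""
    n = len(y)
    X = np.ones((n, 1))          # start with intercept only
    chosen, remaining = [], dict(candidates)
    while remaining:
        base = rss(X, y)
        best = None
        for name, col in remaining.items():
            trial = np.column_stack([X, col])
            new = rss(trial, y)
            df = n - trial.shape[1]
            f = (base - new) / (new / df) if new > 0 else np.inf
            if best is None or f > best[1]:
                best = (name, f, trial)
        name, f, trial = best
        if f < f_to_enter:
            break                # no candidate clears "F to enter"
        X, chosen = trial, chosen + [name]
        del remaining[name]
    return chosen

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
junk = rng.normal(size=n)        # pork-belly prices, say
y = 3 * x1 - 2 * x2 + rng.normal(0, 0.5, n)

chosen = forward_select({"x1": x1, "x2": x2, "junk": junk}, y)
print(chosen)
```

        This also illustrates the “Cuisinart” hazard: throw enough candidate columns at the procedure and, at a 4-ish threshold, roughly one irrelevant variable in twenty will clear the bar by chance.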

  14. Climate Science is a religion.
    The faithful followers of Climate Science do not need any real world test or reality check.
    They just know.
    So don’t pester them with any test of falsification.

    • My thoughts too. I have always held that the climate models synthesise everything that is known, surmised and conjectured about the climate and what drives it. They seem to be collectively and individually wrong so there are gaps in the knowledge and/or errors with the theories.

      Good one w. What happens next?

  15. “Falsifiability is the idea that an assertion can be shown to be false by an experiment or an observation, and is critical to distinctions between “true science” and “pseudoscience”.”

    Wow, you can’t say it more clearly
    Climate science —》 pseudoscience

    But later on she somehow forgets this statement, participates in this collective cognitive hallucination we call AGW, and comes up with:

    “Climate models are carefully developed and evaluated based on their ability to accurately reproduce observed climate trends and processes. This is why climatologists have confidence in them as scientific tools, not because of ideas around falsifiability.”

    pure lunacy!

  16. Willis asks:

    are climate models useful? And if so, just what are these models useful for?

    Taking the multi-model mean (MMM) as a proxy for all models, and concentrating on the CMIP5 surface models only (because I don’t have data for the lower troposphere models), then I would have to say that the climate models have been useful in at least one respect: they have correctly projected the direction of travel, i.e. continued warming.

    That sounds a little trite, because you might argue that, starting from 2005 as I believe the forecast periods in these models do, there was a 1/3 chance of continued warming anyway (the other options being cooling or zero change). But that observation has the benefit of hindsight. Recall that since 2005 there have been several scientists and commentators predicting imminent cooling, based variously on changes in the PDO or solar output, etc. Don Easterbrook springs to mind; so too David Archibald, to name but two who had their cooling forecasts featured here at WUWT.

    Those cooling predictions have demonstrably failed. The CMIP5 models (again using the multi-model mean as a proxy for all the models) have remained inside the temperature projection envelope, and observations have even by some measures exceeded them. For instance, the year 2016 was warmer in reality than was projected by the CMIP5 MMM; though using a longer rolling average the observations are still on the cool side of the MMM, but not by as much as some here seem to believe.

    So I would summarise by saying that as a basic predictor for the long term direction of surface temperature travel, the CMIP5 surface models have been pretty useful; certainly much more useful than those models generated around the same time that foresaw only cooling.

    • How would we know? They have used models to adjust the recorded station data, thus making the surface data simply another subset of the models. The surface data is constrained somewhat by the underlying readings taken in the real world, but only somewhat. The fact that the UHI night-time warming trend from the city gets smeared across all the rural sites during the homogenization and the fact that other model based adjustments are made means that the “empirical baseline” data is already woefully polluted.

      • OweninGA

        How would we know? They have used models to adjust the recorded station data, thus making the surface data simply another subset of the models.

        We know that from 2005 onwards the warming trend in the surface data sets is consistent with the warming trends seen in the lower troposphere data sets. UAH is the coolest, but it’s still 0.20 C per decade warming since 2005. The other satellite TLT set, RSS, shows 0.23 C/dec warming over the same period; the same rate as HadCRUT4. GISS and NOAA are only fractionally warmer (0.25 and 0.26 C/dec respectively).

        So if we’re going to say the surface data has been improperly adjusted upwards since 2005 then we’re also going to have to call out the satellite data sets for doing the same thing. The alternative is that both are right, and it really has warmed at a rate of 0.2 – 0.25 C/dec since 2005, roughly what the surface models projected.
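        For what it’s worth, trends quoted in C/decade like those above are ordinary least-squares slopes rescaled to a 120-month base. A minimal sketch (Python with numpy; the synthetic series below is a stand-in with an assumed 0.2 C/decade trend, not any actual data set):

```python
import numpy as np

def trend_per_decade(monthly_anomalies):
    """OLS slope of a monthly anomaly series, expressed in degrees
    per decade (120 months)."""
    t = np.arange(len(monthly_anomalies))
    slope, _ = np.polyfit(t, monthly_anomalies, 1)
    return slope * 120.0

# Synthetic stand-in: 0.02 C/year of warming plus weather noise, 16 years.
rng = np.random.default_rng(3)
months = 12 * 16
series = 0.02 / 12.0 * np.arange(months) + rng.normal(0, 0.1, months)
print(round(trend_per_decade(series), 2))  # close to 0.20
```

        Note that even with a perfectly linear underlying trend, the noise leaves an uncertainty of a few hundredths of a degree per decade on a 16-year slope, which is worth bearing in mind when comparing data sets that differ by 0.01-0.02 C/dec.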

    • Your time series results include several large El Nino events, which are natural and have nothing to do with increasing atmospheric CO2, and whose effects are not included in any CMIP5 model. Yet you include them without mention in your conclusion about modeled surface warming. Very strange.

      • Doonman

        Your time series results include several large El Nino events, which are natural and have nothing to do with increasing atmospheric CO2, and whose effects are not included in any CMIP5 model. Yet you include them without mention in your conclusion about modeled surface warming. Very strange.

        The time series runs from 2005, since when there have been 3 El Nino and 3 La Nina events:

        What’s ‘very strange’ is that you choose to mention the natural warming effects of El Nino periods but choose to ignore the natural cooling effects of the La Nina periods. Are you saying we should subtract all the natural warming from the observations but shouldn’t compensate for the natural cooling? Sounds like a good way to introduce a cooling bias.

        As far as I know the models do incorporate ENSO events, though in a random way since obviously the exact timing of such events can’t be predicted. This is one of the reasons for the variation in the model outputs.

    • DWR54,
      It isn’t sufficient to have the sign of the trend correct. To be useful, there must be a very small quantitative error in the slope of the trend. It makes a large difference in the proposed response if there is one or two orders of magnitude difference between reality and the modeled reality.

      • Would you say that the CMIP5 models have been more or less useful than the several other models initiated around the same time that projected cooling over the same period?

        I would say ‘more’ useful.

      • DWR54 August 12, 2017 at 2:08 am

        Would you say that the CMIP5 models have been more or less useful than the several other models initiated around the same time that projected cooling over the same period?

        I would say ‘more’ useful.

        Thanks, DWR. First, which are the “several other models initiated around the same time” that are NOT part of the CMIP5 group?

        Second, you say they’ve been “more useful” … but for what? What actual, actionable information have we gotten from the CMIP5 models?

        Regards,

        w.

      • Hi Willis, thanks for the response.

        I was referring to forecasts by, specifically (since they were featured on this site), Don Easterbrook and David Archibald. Archibald’s was published in the trade journal Energy and Environment; Easterbrook made his cooling forecasts on blogs only, as far as I can tell.

        Insofar as they predicted continued warming, the CMIP5 models have been useful. If you were a betting man in 2007/8 (which I’m sure you’re not) and you had a 3-way choice to bet on:-

        1. Continued warming;
        2. No change; or
        3. Cooling

        Then you would be a happy man had you listened to the CMIP5 model projections. Less so if you had paid attention to Easterbrook or Archibald.

    • Please explain the pause using model inputs consistent with those used in the model which provides the most accurate hindcast.

      • A ‘pause’ or even an ‘acceleration’ are easy enough to generate in any global temperature data set due to natural variability, such as ENSO or aerosols, etc, provided that the period chosen is of a short enough duration.
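        That short-window point is easy to demonstrate (Python with numpy; the warming rate, noise level, and smoothing are invented for illustration): even a perfectly steady warming signal plus weather-like noise yields 10-year trends scattered widely around the true value.

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1980, 2020)
warming = 0.015 * (years - years[0])  # steady 0.15 C/decade, by construction

# Weather-like noise: white noise lightly smoothed to mimic persistence.
noise = np.convolve(rng.normal(0, 0.15, years.size), np.ones(3) / 3, "same")
temps = warming + noise

# Trend of every 10-year window inside the steadily warming record.
window = 10
trends = [np.polyfit(years[i:i + window], temps[i:i + window], 1)[0] * 10
          for i in range(years.size - window + 1)]
print(min(trends), max(trends))  # scatter widely around the true 0.15 C/decade
```

        With no change at all in the underlying forcing, the decade-scale windows disagree with each other by more than the trend itself, which is why short-period “pauses” and “accelerations” on their own settle nothing.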

    • DWR54 writes: “Taking the multi-model mean (MMM) as a proxy…”

      You understand that “taking the multi-model mean” is a procedure so far beyond what is acceptable in science and statistical methods that it should never have been published in a respectable journal?

      Seriously. What you support, the method proposed, is completely without merit. It’s absolute junk. The worst sort of lie.

      • The multi-model mean is just a way of averaging the output of all the models. It’s been used many times by authors on this very website, such as Bob Tisdale and others. There’s nothing wrong with it, per say.

        If you prefer spaghetti graphs then you can just run the whole ensemble and add the observations to those. They will be somewhere in the middle of the pack.

      • DWR54 August 12, 2017 at 2:13 am

        The multi-model mean is just a way of averaging the output of all the models. It’s been used many times by authors on this very website, such as Bob Tisdale and others. There’s nothing wrong with it, per say [sic].

        Well, yes, there is something wrong with it. Unless we know that the models are a) independent and b) completely explore the parameter space and c) have been verified and validated, it’s just garbage in, garbage out. However, for the CMIP5 models none of those is true.

        If you prefer spaghetti graphs then you can just run the whole ensemble and add the observations to those. They will be somewhere in the middle of the pack.

        You appear to be mistaking graphing an average of the models (which you surely can do), for that average having some meaning and some greater validity.

        Regards,

        w.

      • DRW54 writes: “The multi-model mean is just a way of averaging the output of all the models.”

        Yes of course it is. But why would you rationally combine the average length of a fish with the average length of a mammal? You wouldn’t. Why?

        Because it tells you nothing. If you average the length of a Pacific Smelt with the length of an African Giraffe, you’ll certainly get an arithmetic average, which tells you exactly nothing about smelt or giraffes, and that would be the point.

        This is very basic statistics DWR, very basic. Would you like a complete treatment of this subject? I suggest Box, Hunter and Hunter, “Statistics for Experimenters”, 4th edition.

        Not rocket science DWR, but science anyway.

      • Willis Eschenbach

        You appear to be mistaking graphing an average of the models (which you surely can do), for that average having some meaning and some greater validity.

        Rather I was simply using the multi-model average as shorthand for the model spread as a whole. What’s wrong with that exactly? Bob Tisdale did it here for years without any negative comments that I’m aware of.

        The average doesn’t necessarily have some greater meaning or validity; it’s just a handy way of showing how the models, as a group, are doing against observations.

      • Bartleby

        …why would you rationally combine the average length of a fish with the average length of a mammal? You wouldn’t. Why?

        Erm, who’s doing that??

        I’m comparing the average of all the CMIP5 models with observations.

        Not rocket science DWR…

        Indeed.

      • DWR writes: “What’s wrong with that exactly?”

        What’s exactly wrong with that is you’re pretending different things are the same. Smelt aren’t Giraffes, even though the math lets you average them. In the same way, no two climate models are similar enough to support the idea of a meaningful average.

        I mentioned the book; it seems you may not have consulted it. You need to study the subject a bit, or you can just take my word for it. I’ve summarized it here, but if you seek validation you’ll need to study a bit more. I can say without doubt that what’s being done by “averaging” the output of various climate models is pure junk.

    • Do they tell us how many runs they throw away? How many fail sanity checks and are aborted, how many produce an unwanted result and are stuck at the back of a drawer somewhere? Is there a set of limits imposed on the models during or post run?

  17. Rule one of climate ‘science’: if the values differ between model and reality, it is always reality which is in error. That takes care of any such problems.
    For who needs facts when you have ‘faith’?

    • [ if the values differ between model and reality , it is always reality which is in error ]

      Reminds me of an Andy Rooney anecdote when he said he remembered an event that happened while he was a field reporter during WW2. He referenced his notes from the time and saw that they contradicted his memory, so he concluded that his notes were wrong.

  18. … just what is it that they [models] are good for?
    Any story you want to tell.
    It’s not the models per se. They’re deterministic expressions of ideas, as you say. It’s the premises they assume that are the culprits. Models obscure the foundations of the argument by substituting a simplified, easier-to-grasp visual representation. The caveats and erroneous assumptions are lost to the conclusion. Sometimes these defects don’t matter (think of the fluid dynamics models engineers use, where the model output is good enough for the design objective). In climate science they do matter, because model results are incapable of identifying the causes of change, the actual design objective. At best, what they can do is find an “association” between measured variables, and then only with poor predictive power. At worst they tell a story that borders more on fable than non-fiction.

    • Gary writes: “think fluid dynamics models engineers use — where model output is good enough for the design objective.”

      Gary, there’s a very large difference between the uncertainties of computational fluid dynamics (or, for that matter, thermodynamics) and the current “state of the art” in climate modeling.

      As you mention, CFD modeling is useful. It isn’t precise, which is why we have wind tunnels, but it’s useful. The reason it’s useful is that it’s based on accepted theory drawn from physics. Not only is it based on accepted theory, its limitations are well understood. It’s a limit of mathematics that prevents CFD from being entirely predictive. That limit is summarized as the Navier-Stokes problem.

      Climate models don’t have this excuse (though they do have the same problem), nor are they “usefully” predictive. In fact they’re so wrong they’re laughable. They should have been ash-canned years ago. There is no underlying physical theory to support them, which is why they “aren’t even wrong”.

      I’m a bit tired of seeing this comparison made, please excuse me.

    • Thanks. The paper Willis cited is well written, but the subject matter is obscure and requires more than the few minutes’ thought jammed in between pulling weeds, washing the mutt, and hooking up another wire in the remote entry unit I’m trying to wedge into an ancient Nissan. I’ll be interested in what Munshi might have to say on other subjects.

  19. I think models are essentially like an equation: they produce an output, provided the parameters are correctly known and the equation is correct for that model.

    So x+y=z.

    However, just as in mathematics, the values for x and/or y might be unknown, and the equation itself might not be relevant to the value of z. Therefore the model can only be useful if x and y are known with high certainty and the equation is appropriate. So 1 + 2 = 3, but 1 × 2 does not = 3. And if either x or y is unknown, then one doesn’t know what z is (e.g., x + 2 = ?).

    The more uncertainty and the more parameters, the less likely the output is correct. And one also has to know that the correct equation is being used.

    Models are really only useful where there is reliable information on the parameters x and y, where the variables are few and well constrained, and where the equation is appropriate, meaning there is only a small degree of uncertainty, which the model then addresses to provide the output. A model essentially fills in gaps where small uncertainties exist; it does not fill in large ones. With high degrees of uncertainty in either parameters or equation, the models provide no reliable output, or simply false outputs.
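    The point about parameter certainty can be sketched with a Monte Carlo propagation (Python with numpy; the toy model z = x + y and the spreads chosen are illustrative only): the spread of the output grows directly with the uncertainty of the inputs.

```python
import numpy as np

def output_spread(x_mean, x_sd, y_mean, y_sd, runs=100_000, seed=5):
    """Monte Carlo spread (standard deviation) of z = x + y,
    given uncertain inputs x and y."""
    rng = np.random.default_rng(seed)
    z = rng.normal(x_mean, x_sd, runs) + rng.normal(y_mean, y_sd, runs)
    return float(z.std())

tight = output_spread(1.0, 0.01, 2.0, 0.01)  # well-known inputs
loose = output_spread(1.0, 1.00, 2.0, 1.00)  # poorly known inputs
print(tight, loose)
```

    With well-constrained inputs the output spread is of order 0.01; with order-one input uncertainty it is of order one, i.e. comparable to the quantity being estimated, and the output is effectively useless.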

    • Keep in mind that your equation depends on what is being combined. 1 + 2 does not equal 3 if x is units of water and y is units of alcohol. And a model is not the system being modeled.

    • Thingodonta,
      You left out the ‘tuning factor’ in your model! It should be x+y+k=z. With the appropriate selection of k you can get any result you need! (Do I need to add /sarc?)

  20. “they have wildly overestimated the changes in temperature since the start of this century”

    They’ve been wildly overestimating the changes in temperature for a lot longer than that.

    • Hey, they can wildly overestimate temperature changes hundreds of thousands of years into the past or future if you so desire. Try doing that with any other tool.

  21. “Climate models are carefully developed and evaluated based on their ability to accurately reproduce observed climate trends and processes. This is why climatologists have confidence in them as scientific tools, not because of ideas around falsifiability.”

    By this statement the “scientist” has essentially admitted incompetence for herself and all like-minded colleagues. They might as well admit that they are working in another field that is not part of what we recognize as science. Perhaps “climastrology” fits the bill.

  22. I can only re-iterate my earlier comment wrt models: models are great for interpolating between observed and matched data; their results must be increasingly suspect as they extrapolate beyond the observed data…

    • “I can only re-iterate my earlier comment wrt models: models are great for interpolating between observed and matched data; their results must be increasingly suspect as they extrapolate beyond the observed data…”

      The same can be said for polynomial data fits and similar techniques. So, we’re saying that GCMs are a costly and complex way to generate dubious results that could be obtained with a lot less effort using alternate approaches?
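
      The polynomial comparison is easy to demonstrate. A minimal sketch (my own synthetic example, not anything from a GCM): fit a cubic to a sine over the observed interval, then ask it about a point well outside.

```python
import numpy as np

# Fit a cubic to a sine over the "observed" interval [0, pi]
x = np.linspace(0, np.pi, 50)
y = np.sin(x)
coeffs = np.polyfit(x, y, 3)

# Inside the data the fit is fine...
inside_err = np.max(np.abs(np.polyval(coeffs, x) - y))

# ...but extrapolated to 3*pi it fails badly: sin(3*pi) = 0, while the
# polynomial's leading terms keep growing without bound.
extrapolated = np.polyval(coeffs, 3 * np.pi)

print(round(inside_err, 3))    # a few hundredths
print(round(extrapolated, 1))  # large and negative, nowhere near zero
```

      The fitted curve is excellent everywhere it was trained and worthless one period later, which is the whole complaint about trusting hindcast skill as evidence of forecast skill.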

  23. This issue goes to why I am always commenting that any graph of climate model results must show the date the model was run as a bright vertical line so that the reader can distinguish between hind-casting (which any model can do) and forecasting.

  24. “However, there is a lovely new paper called The Effect of Fossil Fuel Emissions on Sea Level Rise: An Exploratory Study ”

    It’s an interesting paper. Since only Brest amongst their 17 data sets shows sea level to be correlated with emissions, I reckon that science has proved that France is doomed. Should we tell the French or let them live in happy ignorance?

  25. “For some time now, I’ve said that a computer model is merely a solid incarnation of the beliefs, theories, and misconceptions of the programmers.”

    True enough. Plus which, unless the model is trivial, the code is likely to contain unintentionally flawed representations of those beliefs, theories, and misconceptions — i.e., “bugs.” The bugs which cause whacky results will almost certainly be tracked down and fixed. Bugs with more subtle effects very likely will not. The closer the models-plus-bugs match the anticipated results, the less likely the bugs are to be identified and exorcised.

    • It has been claimed that “all non-trivial computer programs have bugs.” The extermination of the bugs becomes more problematic with increasing complexity of the program, and, as you point out, less likely to be even identified as the output more closely matches what is expected.

      • “all non-trivial computer programs have bugs.”

        Except mine. My programs have no bugs.

        And I don’t need to roll down the windows when I phart either. Ask anyone. My dog has a problem with that, but I don’t.

    • Afterthought (not that anyone will read it). I think there is probably something somewhat akin to confirmation bias in play here. (Minor) bugs that cause results to be what the modeler expects/hopes for are possibly somewhat less likely to be found and fixed than similar bugs that don’t support the expected result. Given the obvious warmist bias in climate “science” that might result in the models running a little hotter than they might in an ideologically neutral environment.

  26. I’m always amused by the argument that a climate model is reliable because it fits past data. If a model that fit well with past data was a reliable predictor of the future behavior of the system, nobody would ever lose money in the stock market.

  27. You ask a good question: “then … just what is it that are they good for?”

    The answer might be found in a TED talk given by Gavin A. Schmidt. He argued that the models were “artful”. As art, they are mere decoration, open to the interpretation of the viewer. Some people want to find deep truths in them, but that is only a reflection of that person’s own thoughts.

    • Richmond,
      Some have complained that they can’t define art, but they know it when they see it. I’m afraid that I fail to see the “art” in the extant GCMs.

      • OK, perhaps the only art is in the mind of Gavin Schmidt. On the other hand, as artful deception, the GCMs do seem to work on many people. This is like the art of all messaging: some advertising campaigns work better than others. Some zombie ideas resist all efforts to kill them off. Just look at the alleged consensus of scientists that never existed. You should be congratulated on failing to see wonderful clothing that is entirely imaginary, visible only to true believers in the GCMs.

    • Even that admission is disingenuous, as the models are waved around as the explicit description of the future and the rationale for a war on modern civilization. The proponents of this assault may be largely unable to see that the economic commotion they are instigating would be disastrous, but that is because they are Leftist by inclination, and the history of Leftist government is one of unforeseen economic disaster. They usually blame this on Capitalist enemies instead of examining the realities of human economic interaction.
      The Socialist ideal is not achievable, and not even desirable, as a degree of contention and competition is essential for human civilization to progress. When Capitalism raises hundreds of millions out of poverty, the notion of throwing out the baby with the bathwater becomes the critical consideration. Eco-Socialism is power-mad immorality!

  28. Steven Mosher August 11, 2017 at 2:04 am

    “Indeed. The models are an expression of the theory that CO2 causes warming. As a result, they are less than useful in testing that same theory.”

    Except that is not the theory.

    The Theory is that the climate of the planet is the result of

    ALL external forcings
    AND
    Internal variability.

    So if you build a model of the planet and you only include solar forcing and say volcanoes… Guess what?
    Your climate model will really suck…

    So you add Methane
    it gets better
    you add HFCs
    it gets better
    you add CO2

    And dang, you can model this very complex thing and get answers that are correct to within small percentages.

    Now the climate model doesn’t represent the whole theory, the same way CFD code doesn’t and can’t represent the flows of fluids. Further, nobody believes in the theory (climate results from external forcing and internal variability) BECAUSE of the model. And finally, if models were just crazy wrong, if they showed cooling from the addition of CO2, we would know the models were wrong, because they violate what we have known since 1896. In short, models add nothing to the foundation of our knowledge, and if they fail, that says nothing about the theory. It rather means the model is wrong, not the theory that it was struggling to represent.

    In short: destroy every model ever constructed, and we still know what we knew in 1896 (CO2 causes warming, like ALL GHGs), and we still know what steam engineer Callendar knew: GHGs cause warming.

    Mosh, thanks for your reply. First, you say:

    Destroy every model ever constructed, and we still know what we knew in 1896 (CO2 causes warming, like ALL GHGs), and we still know what steam engineer Callendar knew: GHGs cause warming.

    Sorry, but that is not true at all. You mistake a change in forcing for a change in temperature. That is a bridge way, way too far. What we do know is that GHGs cause increased forcing. Period.

    However, given that the climate RESPONDS to changes in temperature; and given that tropical cloud albedo is highly correlated with temperature; and given that the increased forcing from a doubling of CO2 would be totally counteracted by a 1% change in albedo; and given that thunderstorms are also correlated with temperature and cool the surface in a host of ways; given all that and more, we have no reason to assume a priori that a change in GHG forcing will perforce change the temperature, and we have a heap of evidence to show that that may NOT change it.

    On the most basic level, I know of no other complex natural system where the output is a lagged linear function of the input as climate models claim. Complex systems are … well, they’re not that simple, which is why they are called “complex”.

    As one of many examples of such evidence that the prevailing theory is incorrect, if forcing change is linearly related to temperature change as the theory falsely claims, volcanoes would have a huge effect on the global temperature … but as I have shown repeatedly, if you don’t know when the volcanoes occurred it is NOT POSSIBLE to identify them in the global temperature datasets.

    Next, consider: almost every model in the CMIP5 dataset used a different set of forcings as input … but all of them are able to reproduce the historical temperatures … how can that be true if a) the models are based on “physical first principles” as is often claimed and b) temperature changes are linearly related to forcing changes as the prevailing theory falsely claims?

    Next, you say:

    So if you build a model of the planet and you only include solar forcing and say volcanoes… Guess what?
    Your climate model will really suck…

    So you add Methane
    it gets better
    you add HFCs
    it gets better
    you add CO2

    And dang, you can model this very complex thing and get answers that are correct to within small percentages.

    Yep. All you need to do is choose which aerosol dataset and which methane dataset to use, and whether or not to include HFCs, and then tune your model, and Dang … perhaps that impresses you. For me, that’s merely evidence that the “model of the planet” that you started with has serious problems, and that you are just messing with tunable parameters.

    Why is it that no climate model is ever trained on half of the historical temperature record, and then tested to see if it works on the other half of the historical record? Every other scientific model is tested that way, but noooo, not climate models. The world wonders …

    Moving on, you said of my words as follows:

    “Indeed. The models are an expression of the theory that CO2 causes warming. As a result, they are less than useful in testing that same theory.”

    Except that is not the theory.

    The Theory is that the climate of the planet is the result of

    ALL external forcings
    AND
    Internal variability.

    OK, you want to get picky, fine. The theory is that CO2 and other forcings cause warming. Since “internal variability” is assumed to cancel out in all the climate models, I’m not sure why that is relevant, but sure, toss that in as well.

    Happy now? It doesn’t change the underlying argument by one whit, but if it makes you feel better, fine.

    However, all of the models have different external forcings and different “internal variability” (whatever that might mean) … but they all hindcast the past equally well. How is this not a huge problem in your world?

    The problem is that because they ONLY contain external forcings and internal variability, the models all lack the thermoregulatory mechanisms that I have provided heaps of evidence for—cumulus clouds, thunderstorms, dust devils, the El Nino/La Nina pump, squall lines, and all the rest. Many of these are not included because they are “sub-gridscale”, smaller than even the smallest of the climate grids in the most detailed models.

    So you are trying to model the climate while leaving out a large range of crucial phenomena because they are too small … and you see no problem with that.

    Truly, model enthusiasts like you live on another planet. It’s called “ModelWorld”, and it has one bizarre characteristic. Unlike the real world, it is linear, with the claim that changes in temperature are a linear function of changes in forcing.

    And that wouldn’t be a problem, but over and over you guys keep insisting that ModelWorld is enough like the real world to serve as a valid proxy for the real world in a variety of calculations.

    Sadly … it’s not, and it doesn’t.

    w.

    • What we do know is that GHGs cause increased forcing. Period.

      At the risk of sounding pedantic, that’s a physically incorrect statement. Since GHGs don’t SUPPLY any energy to the planetary system, but merely increase the opacity of the atmosphere to LW radiation, the total system forcing (TSI) remains UNCHANGED. Because of the dominant role of other mechanisms (primarily evaporation) in heat transfer, only the atmosphere’s ABILITY to retain radiated terrestrial heat is increased–not necessarily its actual heat CONTENT at any given time-scale. The whole AGW meme arises from the misguided notion that radiative transfer in the atmosphere is dominant in setting the surface temperature.

      • “The whole AGW meme arises from the misguided notion that radiative transfer in the atmosphere is dominant in setting the surface temperature.”

        When in fact we can very easily demonstrate that the principal mechanism of energy transfer in Earth’s atmosphere is convection. It can be done in a lab, and it’s measurable in situ. The focus on radiative transfer models is misguided and also misdirects the attention of the general public.

      • Indeed, moist convection–strikingly evident in the formation of cumulonimbus–is the principal means of heating the upper troposphere. At lower levels, oceanographers established in the 1970s (IIRC) that heat transfer by surface evaporation exceeds all other mechanisms combined.

  29. Willis,

    This is a “Mommie, the Emperor isn’t wearing any clothes” respite from the gobbledegook I’ve been wading through lately and finally decided isn’t worth my time deciphering, no matter how ignorant I may be of sufficient maths skills to read papers deeply and therefore the constant need for educating and querying.

    Coincidentally, I’ve been studying sea level data and acceleration claims for the past three weeks–they’re in the air–so this intrigued me off the top.

    I concur completely with Munshi. You can’t use a pencil to describe a pencil (his pt #1).

    Anyway, thanks. Like having one of grandma’s lemonades at 5 pm on a hot August afternoon, and discovering she slipped in some Jamaican rum.

  30. Mosh,

    The Theory is that the climate of the planet is the result of

    ALL external forcings
    AND
    Internal variability.

    Then why can’t they model clouds, wind patterns, and ENSO (which Jim Hansen said he chose to ignore in his models because ‘it was too complicated’)?

  31. I called up Dr. Bill Gray once, and he reeled off a list of the things that models absolutely can’t “model,” because meteorologists, forecasters, and scientists don’t know this information–it ain’t available–at more than an outside extreme of six months (for certain wind flow data). Can’t find the list right now.

    And he was just talking about natural variability.

  32. Willis writes: “I’ll leave it there for y’all to take forwards.

    I’m sure you know there was a recent article on this site discussing this very subject: the role of epistemology in scientific discourse. In it, the author concisely stated the case that any person conversant with the scientific method is qualified to find fault in its application. That would be exemplified by the arguments made by Dr. Sophie, who apparently believes the falsification (“falsifyability”?) of a scientific hypothesis is no longer relevant in a post-science world of five-year-old females who aspire to become scientists. It isn’t required because she says that’s just not the way she does it; cogito ergo sum.

    You’re just doing it wrong because you’re so dumb. I flat-out refuse to demonstrate how to do it right.

    Hope that’s clear now. Toddle off and let the adults speak.

  33. Very good point, W. Using models to test the theory is the same as using muons to test relativity: a logical fallacy, A lending to B so that B can prove A correct.

  34. just what is it that are they good for?

    I have to spell it out for you?

    OK, Here it is:

    Getting research grants from the Federal Government, and getting tenure.

    In every time, and at every place it is always about the Benjamins. That is all.

    • Still simpler: a model for the result of coin flips. There are very few variables: the side that is up before flipping, the upward acceleration of the flip, the angular momentum imparted to the coin, and the surface it lands on (other potential variables can be eliminated by doing the flips in a controlled environment). Should be simple, no?

      Here is my rule: you cannot create a predictive model of a system that has multiple independent variables.
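
      The coin-flip point can be made concrete. Below is a deliberately over-simplified deterministic model of the kind the comment describes (it ignores air resistance and bounce; the physics and all numbers are my own illustration). The model is exact, yet the outcome is exquisitely sensitive to the inputs:

```python
def coin_outcome(up_face, v0, omega, g=9.81):
    """Deterministic toy flip: up_face at launch ('H' or 'T'), launch speed
    v0 in m/s, spin rate omega in rev/s. The landing face is set by how many
    half-turns fit into the ballistic flight time 2*v0/g."""
    flight_time = 2 * v0 / g
    half_turns = int(2 * omega * flight_time)
    if half_turns % 2 == 0:
        return up_face
    return "T" if up_face == "H" else "H"

# The model is exact, yet a 5% change in spin rate flips the answer:
print(coin_outcome("H", 2.0, 20.0))  # H
print(coin_outcome("H", 2.0, 21.0))  # T
```

      With only four variables and perfect physics, prediction still demands the inputs to a precision no measurement supplies. A climate model has vastly more variables, each far less certain.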

    • Mr. Simon:

      The allusion to models predicting baseball scores is inappropriate, as modern climate models do not predict. Instead, they “project.” When a journalist, left-leaning politician, or statistical neophyte sees the word “project,” his or her brain automatically converts it to “predict,” as he or she is unaware that there is a difference in meaning. Unless they are incompetent, climatologists know there is a difference, but in my experience they never correct the error. Why not? An attractive hypothesis is that to correct the error would be bad for business.

  35. I have a computer model that suggests the planet will be attacked by wave after wave of aliens and it’s hopeless because every time we destroy one incoming wave the next is stronger and faster, the consequences are inevitable – we will lose and we will all be killed – the model proves it. Oh wait – I’ve just realised I was playing space invaders. Panic over.

  36. Willis
    This caused me to remember a post you did some years back that indicated the top of atmosphere radiation input output measurements had been adjusted to agree with the models.

  37. It is quite common to create models in some fields like process engineering. (In 1974 we had a connection from central Queensland to Chicago, via fencing wire to manual exchanges to satellites, to model a pilot plant process using chlorine gas at 1040 deg C to strip unwanted iron out of the beach sand mineral ilmenite. Ten tons a day of chlorine in the middle of a town is serious for accidents, so we were thorough.)
    So what is a model for? Mainly, it is a test to see if you can create a mimic of what you know of a process using parameters that you think are enough for a complete description. If you run your model and it does not give the expected outcome, you have several options –
    - junk the model
    - refine the model
    - overlook the error
    It seems to get forgotten that the evaluation of the model is usually how close it comes to expectations. There are a few occasions when the model tells you new or unexpected information, which you then have to work on further if you want to get your sunk money back. If the model does not confirm your expectations, it can be because your expectations are wrong or because you have not calibrated well enough with the right parameters.
    Creators are loath to junk models. When they give not quite the expected response, some creators try to push through saying this is state of the art, cost a bundle of $$$, is the best anyone can do, uses a $100 million supercomputer, etc etc. Such marketing exercises are not really excusable. They are vanity exercises. A coverup is a coverup.
    It would help if model creators used proper, formal error analysis all the way through.
    (I am trying to support the main Willis contention here using words and examples from my past.) Geoff

    • Geoff
      How close the model comes to expectations is a question of accuracy, and differs from the question of the truth or falsity of the conclusion of an argument. Modern-day global warming models reach no conclusions and thus are not falsifiable.

  38. Hi, new guy here so I want to apologise if this idea was covered already…..

    There is something missing in the Climate Debate….. Religion.

    There are many, many people who look at the issue of Climate Change and claim some variation of “the science is settled.” These people have switched from a scientific point of view to a religious point of view. Lemme ‘splain…..

    People who have faith believe in their God(s). It doesn’t matter how much evidence you give against their beliefs, they will keep believing in their God(s).

    Climate Change has become a new God, well a new faith to be accurate. The belief that Climate Change is real and serious is immune to any scientific method or research or inquiry. The believers will not be swayed, the heretics who don’t believe will be cast out of the temples.

    If you want to crack the shell of the Climate Change group you have to attack their beliefs, not their science.

    • +1 ^^

      “If you want to crack the shell of the Climate Change group you have to attack their beliefs, not their science.”

      And of course we have Maslow to tell us this is very, very hard. As a result I personally have little faith in the idea the rationalists involved in this debate are winning; they aren’t winning. At the very best they’ve sent their opponents into the sound proof booth for a little while.

  39. Unbelievable. Look. If the model is not falsifiable IN THEORY then it is pseudoscience. Period.

    Sure, just because your model’s predictions cannot be tested in the present does not render your model pseudoscience. Nevertheless, it should be plausible that your model’s predictions can be tested sometime in the foreseeable future.

    In other words, the IN THEORY bit must have a pragmatic aspect to it.

    • RW, it’s important to recall there really was not just popular support for a flat earth, but also authoritarian support. Equally, the heliocentric theory wasn’t generally accepted by the masses or the reigning intellectual authority.

      It has happened in the past, it will happen again.

  40. Mr. Eschenbach’s suspicion is correct. The claims of the climate models are not falsifiable; thus global warming climatology is not really scientific. IPCC AR4 explains that falsifiability is an outmoded concept that in the modern era has been replaced by peer review. This, however, is an outrageous lie!

    • Your error is equating climate models with climatology. The study of climate is more than mere models. In fact models cannot be “falsified”…because a model is either useful, or it is not useful. A good analogy to illustrate your error is equating a hammer with carpentry. Carpentry is much more than a hammer, it includes rulers, saws and other TOOLS. Models are the tools of climatology, so your supposition that a model can be falsified makes no sense.

  41. Mark
    Usefulness is unrelated to falsifiability. Falsifiability is a property of a statement that has a property called its “truth value.” The truth value takes on the values “true” and “false.”

  42. Every last one of you who claimed ‘the basick sights is sownd’ should be strung up from a light pole as a molester of science years ago.

    That includes you Eschenbach: the fakes like you are the ones who brought your pseudo-science religion into the sewer. It’s not the people who told you that your idiocy is wrong, whose reputations are dimmed.

    It’s the people who claimed insulation making less light warm a rock, can make sensors depict more light warming it every time the insulation makes less even reach it to warm it.

    • Will Greenberg August 16, 2017 at 10:26 am

      Every last one of you who claimed ‘the basick sights is sownd’ should be strung up from a light pole as a molester of science years ago.

      That includes you Eschenbach: the fakes like you are the ones who brought your pseudo-science religion into the sewer. It’s not the people who told you that your idiocy is wrong, whose reputations are dimmed.

      Off our meds again, are we? Will, I politely requested above that you QUOTE THE EXACT WORDS YOU ARE DISCUSSING. You apparently think that doesn’t apply to you.

      It’s obvious that something has your knickers in a royal twist. It is not apparent what that might be.

      Come back when you have something real to discuss, and lose the ‘tude, OK? It doesn’t look good on you …

      w.

  43. One final thought. This entire discourse is Much Ado About Nothing.

    The implications from CERN CLOUD experiments are that CO2 does not play a significant role in global warming, climate models used by the IPCC to estimate future temperatures are too high, and the models should be redone. A likely conclusion is that Newtonian physics cannot model earth processes adequately to produce actionable results. Ever! Extrapolating from a few opinions of physicists, the application of particle physics in climate modeling is far too expensive to pursue. Solving the climate change conundrum before the world wastes 100 trillion dollars running in the wrong direction is the major problem of climate science.

    A simple thought experiment tells me that if the main goal of climate modeling is to predict the earth’s long-term temperature, research should focus entirely on the analysis of the time-series of earth temperature observations. Attempting to predict the earth’s temperature by modeling a myriad of complex interactions and processes that describe the solar system is ludicrous.

    An alternate approach is to assume the solar system is a black box and to focus entirely on modeling the measurable output, the earth’s temperature. If one cannot model the output of a complex system, what is the likelihood that the complex system itself can be modeled? Modeling the solar system may be a lot more fun and lead to a lifelong career, but the amount of progress will probably be like the distance one can travel on a stationary bicycle, zip.
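
    The black-box, output-only approach proposed above can be sketched in miniature. Here a single lag coefficient is estimated purely from a synthetic series; the AR(1) form and every number are my own illustration, not a claim about real temperature data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Purely synthetic stand-in for an observed output series.
n = 500
series = np.zeros(n)
for i in range(1, n):
    series[i] = 0.9 * series[i - 1] + rng.normal(0, 0.1)

# Black-box step: estimate the one lag coefficient from the output alone,
# with no physics anywhere in sight.
x, y = series[:-1], series[1:]
phi = float(x @ y) / float(x @ x)

# One-step-ahead forecast from the fitted coefficient.
forecast = phi * series[-1]
print(round(phi, 2))  # close to the 0.9 that generated the data
```

    The point of the sketch is only that the output can be modeled without modeling the machinery that produced it; whether such a fit has any forecast skill on the real climate is exactly the open question.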
