The Calamity of Models – Podcast with Willis Eschenbach

Our resident polymath Willis Eschenbach joins Heartland Senior Fellow Anthony Watts to discuss the parallels of hysteria and failure surrounding climate models and the coronavirus model that effectively put the world on hold.

Problem is, this time it’s worse than the old computer programmers’ adage, “garbage in/garbage out.”

I’m sure you’ve heard about Neil Ferguson, the philandering epidemiologist from Imperial College who created the COVID-19 model that governments used to make lockdown decisions. Turns out, it was hugely flawed, and the code was a train wreck.

As a result, Neil Ferguson’s COVID-19 model could be the most devastating software mistake of all time. Meanwhile, Willis has been graphing the true nature of this epidemic on a regular basis here at WUWT.

Even in climate science, the gloom-and-doom worst-case scenarios have been seen to need dialing back. Except that it’s no longer climate “science” these days, and instead of dialing back, the climate dogmatists required a 50% increase in future temperature predictions.

These unreal models are affecting real lives.

126 thoughts on “The Calamity of Models – Podcast with Willis Eschenbach”

  1. Question: does a PhD in science automatically make you a good software engineer?
    Answer: no, it does not. And that is the issue: many of those behind these complex models have nowhere near the coding skills required to create these models in the first place.
    Hence one very good reason for buggy, poor code that often results in models that cannot hit a barn door when standing next to it.

    • The problem is they are their own customer. They like the junk output their models produce because it keeps them employed by spewing out evermore nonsense climate predictions. Their political handlers want alarmism. The political class then uses OPM to pay the Climate Dowsers their wages to produce divinations.

    • I’ve started three software companies over the years and my very best two software engineers were an English and a Math major. I’ve found you either have “it” or you don’t, regardless of what formal education you have had. Something about tying problem solving, visualization, logic, attention to detail and organization into one package. A really great coder has it all mapped out in their mind before actually writing the code. Plus object based languages have also changed the landscape.

      We would try to keep the number of people working a project to a minimum. One senior person could put out as much code as 3-5 junior people and it would work generally out of the box.

      Glad to hear Willis! A voice behind all the words I’ve read of his over the years.

      • They declare that unless someone can prove their theory wrong (and they are the final judges on whether or not you succeeded), then everyone has to agree that their theory is the correct one.

        Sounds like someone else we all love.

        • First of all nobody has shown the theory “wrong,” and nobody has provided a “better” theory.

          • Henry, you forgot the “sarc” tags.

            Anytime you look past the fake science the alarmists put out, it gets obvious carbon dioxide cannot be more than a “bit player,” an “under five,” if it matters at all.

            My own epiphany came when I looked at absorption spectra and noticed how water vapor swamps everything else.

            Then there is the matter that CO2 follows temperature at all time scales. Absolute falsification unless you think Isaac Asimov wasn’t joking when he wrote “The Endochronic Properties of resublimated Thiotimoline.”

          • Better in what way? Their statistics are often shown to be a fishing expedition for data filtered for significance. Another layer of “novel” statistical processing is usually applied to enhance a signal that isn’t statistically compelling. M. Mann gave us a master class on how to do it.

            Have you seen one article that says “we investigated this and it turned out to be inconclusive”? No, every one is a home run, with a scrabble salad of words like “robust,” “unprecedented,” “worse than we thought.”

            You see Henry, ” no one has provided a better theory” is not a useful standard at all. For example: “The scientists produced a forecast 30yrs ago that proved to be 300% too warm when compared to observations. A forecast of no change would have been more accurate.” That is to say, their work was useless, but it’s the best we’ve got!

          • And out jumps Henry to demonstrate my point.

            There have been thousands of people who have “proved the theory wrong”.
            You don’t have to provide a better model in order to prove that the model under question is broken.

            If Henry and Loydo didn’t exist, we skeptics would have to invent them.

          • Henry Pool:
            First of all nobody has shown the theory “wrong,”

            James McGinn:
            Nobody has ever established it as a theory.

            Henry Pool:
            and nobody has provided a “better” theory.

            James McGinn
            It isn’t a theory. It’s pandering.

            James McGinn / Genius
            Much of Science Involves Models That Have Been Dumbed-Down to Pander to the Public
            https://anchor.fm/james-mcginn/episodes/Much-of-Science-Involves-Models-That-Have-Been-Dumbed-Down-to-Pander-to-the-Public-e9c1vd

  2. The obvious indicator that the climate modelling community is simply a “barge load” of crap is that there are so many of them, all with their own different predictions of ECS. Forget for the moment that the sensitivity can be tuned to whatever they want. Then the IPCC employs the ultimate crap blender, set on puree like some Bass-o-Matic, to blend all those outputs via the CMIP process and come up with an average for the consensus, which they hoist up like some scripture from divine dogma that can’t be questioned without being called a blasphemer, heretic, or worse.
    It’s truly just junk masquerading as science, where everyone keeps doing it because not to do so would end the paychecks.

    • You’re right, Joel.

      Here’s a table of CMIP5 models, from AR5:
      https://sealevel.info/AR5_Table_9.5_p.818.html
      (Source here, or as a pdf, or as a spreadsheet, or as an image.)

      The ECS values baked in to those models vary from 2.1 to 4.7 °C per doubling of CO2. The TCR values baked in to those models vary from 1.1 to 2.6 °C / doubling.

      Such an enormous spread of values for such a basic parameter proves they have no clue how the Earth’s climate really works.

      What’s more, that’s just within the IPCC’s community. It doesn’t include sensitivity estimates from climate realists.

      • All these models are deterministic – they are not Monte Carlo models that give a statistical answer. If the “science is settled,” then why do these models give different answers? Wildly different answers? After all, we know the science, don’t we?
        Just askin’.

        • DrEd,

          Good observation!

          Deterministic models can’t give a Monte Carlo analysis by themselves. If their outputs are different for different runs when no inputs are changed then how do you determine what the “true” value should be? It means the models are *not* deterministic but, instead, are dependent on the state of the system the models are being run on at the time.

          When I worked for a major telephone company we would do Monte Carlo analysis on all capital expenditure projects. We would vary one input at a time across a multiplicity of values in order to determine the sensitivity of the project to each input. We would then vary two inputs at a time to do the same. Then three. Etc. Thousands of runs. At the end we knew pretty well what the projects were most dependent on for success.

          It doesn’t appear that the current climate models are subjected to the same kind of Monte Carlo analysis that we used. That truly makes their results questionable on a statistical basis.
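The one-input-at-a-time sensitivity sweep described above can be sketched in a few lines. This is a minimal illustration with a made-up capital-project model — the function, its inputs, and the sweep ranges are all hypothetical stand-ins, not the telephone company’s actual procedure:

```python
import random

def project_npv(demand, unit_cost, discount_rate):
    """Toy capital-project model (hypothetical): five years of revenue
    from `demand` units at a fixed price, less costs, discounted back,
    minus an up-front capital outlay."""
    price = 10.0
    npv = -50_000.0  # up-front capital outlay
    for year in range(1, 6):
        cash_flow = demand * (price - unit_cost)
        npv += cash_flow / (1 + discount_rate) ** year
    return npv

def one_at_a_time_sensitivity(model, baseline, sweeps, n_samples=1000):
    """Vary one input at a time across its range, holding the others at
    baseline, and record the spread of model outputs per input."""
    spread = {}
    for name, (lo, hi) in sweeps.items():
        outputs = []
        for _ in range(n_samples):
            inputs = dict(baseline)
            inputs[name] = random.uniform(lo, hi)
            outputs.append(model(**inputs))
        spread[name] = max(outputs) - min(outputs)
    return spread

baseline = {"demand": 2000, "unit_cost": 4.0, "discount_rate": 0.08}
sweeps = {"demand": (1500, 2500),
          "unit_cost": (3.0, 5.0),
          "discount_rate": (0.05, 0.12)}
# The input with the largest spread is the one the project is most
# sensitive to; pairs and triples of inputs can be swept the same way.
sensitivity = one_at_a_time_sensitivity(project_npv, baseline, sweeps)
```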

      • Despite enormous associated direct and opportunity costs, that spread of numbers hasn’t changed in forty years:
        “… Published in 1974, the new model found the amount of warming at Earth’s surface caused by a doubling of CO2, known as climate sensitivity, would be around 4°C. That was double the figure that came out of Suki Manabe’s NOAA model and so, in 1979 a US government-commissioned report split the difference between them. Predicting that CO2 levels in the air would double over the next century, it forecast temperatures would rise by 3°C, plus or minus 50%, giving a range from 1.5-4°C …”.
        https://simpleclimate.wordpress.com/2014/05/10/how-lessons-from-space-put-the-greenhouse-effect-on-the-front-page/

        • Chris: If you believe NOAA for left wing Asheville NC the actual dT based on ave T is 0.6C to date. Meanwhile CO2 has allegedly risen from around 316 to 417 ppm. So extrapolating 100 yrs based on the alleged data we arrive at the yr 2074….dT +1.2C and dCO2 +64% vs the low prediction of 1.5C @ 630 ppm CO2. Not sure if that is mo’ better or mo’ worser.

    • At least Covid 19 modellings will be proved or disproved within a year or two – unlike climate modelling where we must wait 100 (or perhaps even 1000) years.

      Watch this space.

  3. How does one convince ignorant politicians and avaricious media reporters of the unreliability of these lousy computer models?

  4. A “computer model” (or just “model”) is a computer program which simulates (“models”) real processes for the purpose of predicting their progression. The utility and skillfulness of models is dependent on:

    1. how well the processes which they model are understood,

    2. how faithfully those processes are simulated in the computer code, and

    3. whether the results can be repeatedly tested so that the models can be refined.

    Models which try to simulate reasonably well-understood processes, like PGR and radiation transport, are useful, because the processes they model are manageably simple and well-understood.

    Weather forecasting models are also useful, even though the processes they simulate are very complex, because the models’ short-term predictions can be repeatedly tested, allowing the models to be validated and refined.

    But more ambitious models, like GCMs, which attempt to simulate the combined effects of many poorly-understood processes, over time periods too long to allow repeated testing and refinement, are of dubious utility. (Worst of all are so-called “semi-empirical models,” which aren’t actually models at all.)

    • Computer models are the personal opinions of the people who own and program the models.

      If the underlying climate physics is not known in detail, which is true today, then the result is computer games that makes wrong predictions.

      And that’s what we have now.

      Climate computer games to “support” the 50 year old claim of a coming climate crisis, that is always coming, but never arrives.

  5. I came up with a wonderful idea and asked a friend to look it up on the computer to see what, or if, it was original. The response was that I would make millions, as it was a game changer. After spending more than I should have, I am, twelve years later, still waiting for my first order.

  6. Clear review and explanation.
    Throughout history governments have used “experts” to add impact to their policy announcements.
    In Ancient Greece it was the authority of the Oracle at Delphi.
    The “trick” was to guess what the official wanted to hear.
    Today such authority and advice rests upon computer “models”.

    • The climate computer gamers may be 100 percent wrong but the mainstream media avoids reporting that fact.

      So, if 95 percent of the mass media will not report errors of past predictions, then almost no one will ever know.

      How can the modelers be “wrong” if almost no one knows how wrong they have been?

  7. Would you take a flight in the “renewed” Boeing 737 MAX confirmed operational with a computer model with the quality and stated confidence of accuracy as the present CC models?
    Would you buy or even ride in an autonomous automobile confirmed operational with a computer model with the quality and stated confidence of accuracy as the present CC models?
    Would you allow a Nuclear Power Plant to be built within 10 miles of your home confirmed operational and accepted by the NRC with computer models with the quality and stated confidence of accuracy as the present CC models?
    Fifty years in nuclear engineering, and I have never seen the acceptance of any computer model “blessed” by any standards organization (e.g. ASME, NRC, IEEE, ASTM, NFPA, etc.) for use in any aspect of the design of a nuclear power plant, coal power plant, NG or oil pipeline with the absurd level of non-confidence of these CC models. I spent over a year running performance tests of operational actions, events, and design-basis accidents and comparing the output results with the actual results of the actual plant the code was written for.
    This was a daily event. Run the program for “loss of coolant pump.” Compare the results with the actual plant data for loss of that coolant pump. Find out why the computer output did not match. And repeat. It would take weeks of rewriting code so that the model could simulate a 24-hour run at steady state, with no operator interaction, without drifting out of the acceptable tolerance band. Since the accepted tolerance allows about 1/10 of a percent slop, how far off will the plant be 100 years from now – with no operator action for 100 years? How do they do this with this trash they call Climate Change Models? And I knew EVERY parameter, and the characteristics of every parameter, that needed to be considered in the model’s computer code.
    All the CC prognosticators writing this code have are assumptions, and no knowledge of even what they know or don’t know! Why is this allowed?
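The rhetorical question above about 1/10 of a percent slop has a sobering back-of-the-envelope answer. A hedged sketch — the compounding assumptions here are illustrative readings, not the commenter’s:

```python
import math

# Suppose a free-running model may drift up to 0.1% per simulated day
# (the tolerance band mentioned above).  Two bounding readings for
# 100 simulated years with no operator correction:
days = 365 * 100

# Worst case: the error has the same sign every day and compounds.
worst_case = 1.001 ** days

# Kinder case: daily errors are independent and partly cancel, so the
# RMS drift grows only as the square root of the number of days.
random_walk_rms = 0.001 * math.sqrt(days)
```

Even the kind reading leaves the state roughly 19% adrift after a century; the compounding reading is off by many orders of magnitude.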

        • You have empirical evidence of this claim?
          (Or did you just pull it out of your a55, like a lot of the assumptions that are fed into climate models?)

          • Mosher agrees with the dogma, Willis doesn’t. To those who are trained to only think what they are told to think, agreeing with the dogma makes one smart.

          • One thing can be said, Mr. Pool can construct a clear sentence with proper grammar and punctuation.

    • We generally prefer to simply refer to him as Willis here at WUWT.
      But hey, if you think Willis is the Second Coming… who’s the crazy one?

    • Steve, I gave a very early prediction, hedged about on all sides with statements saying that it was extremely uncertain because it was so early. I said:

      Finally, let’s take a look at the deaths in South Korea. It’s still early, deaths are still happening, so this will be more uncertain.

      and

      Although the uncertainty in this one is greater, it looks at present like the final total of deaths in South Korea will be on the order of one hundred, give or take.

      (I just re-read that post. I was surprised at how much credit I gave you in the comments for your knowledge of the epidemic … and I wouldn’t change that one bit. But I digress …)

      As quoted above, I gave an “on the order of” estimate of 100 deaths in Korea. I said, not once but twice, that it was a very uncertain estimate.

      To date, there have been 282 deaths.

      So my questions are …

      1. I clearly stated it was an early and very uncertain prediction. TWICE. Every time you bring this up, you somehow forget to mention that. So I quoted it once again above, and I ask, why is your retelling always so selective?

      2. With such an early prediction, most folks are happy if the result is in the right order of magnitude, which mine was. And yes, I’m happy with it. So I ask, what kind of accuracy were you expecting?

      3. My most important question. What is the source and nature of the hole in your soul that causes you to bring this same trivia up yet another time, and makes you selectively forget yet another time parts of what actually happened, in a seemingly endless unsuccessful attempt to bite my ankles?

      4. Is there some kind of statute of limitations on your madness? Are you going to be still whining about this in ten years?

      5. I did MUCH better with my Korea prediction than say the experts did who at the time were predicting 2.2 million deaths in the US, then dropped their prediction to 60,000 deaths … and their mistake was hugely consequential, while mine made no difference. So … how come you’re ragging on me and not them?

      Your monomania got so bad over on Twitter that I ended up blocking you. I hated to do that, and I hardly do that to anyone. But all you did was endlessly attack me about this early projection I made about Korean deaths. Why on earth is this early prediction, a prediction that I said TWICE at the time was very uncertain, so important to you? I predicted a hundred, it’s two eighty.

      SO FREAKIN’ WHAT?

      Truly, Steve, it’s reached the level of creepy internet stalking. You’re totally destroying your reputation with this bizarre obsession, and I hate to see you do it because of how much I’ve learned from you. But this is way out of hand. I implore you, please stop this endless cycle. You are publicly self-immolating, and it’s painful to watch anyone do that, much less someone I call a friend.

      Sincerely and sadly,

      w.

      • “The Gompertz Curve estimates that the final total will be on the order of some 8,100 cases or so. ”
        ..
        Current World-o-meter value: 12,800, and still rising; you’re off by 50%.

        • Henry, is there some part of “early uncertain projection” and “order of magnitude” that’s unclear to you? If so, I’m happy to go over them again.

          w.

          • Henry’s the type who would argue over the meaning of the word “is”. Especially if it will distract from the real discussion.

          • Projecting an endpoint that is 50% off of reality means the “model” you used sucks….which qualifies as a “train wreck.”

          • Like most warmistas, Henry has no concept of what error margins are or how to work with them.

      • Enjoyed the interview a great deal……..thanks.

        Willis,
        Your defense of the Korean model is convincing but not necessary for the minds of objective scientists, of which your critic is not.

        Trying to model the outcome for an extraordinarily contagious and unique virus never witnessed before, using a tremendous number of unknowns, including unpredictable human behavior under various environments and restrictions is going to be based on so many speculative assumptions that it makes no sense to hold somebody accountable for using assumptions that were reasonable based on the best information out there at the time.

        It appears that Steve is doing so, not based on reasonable expectations………or being objective but is expressing himself this way because of a psychological/character deficiency.

        Is his objective to give the impression/let us know that his modeling and other skills are superior to yours/others?
        It would appear so and that he does see himself as superior.

        Unfortunately, even brilliant people that communicate this way, when they don’t provide substantive evidence to support their snarky, drive by attacks…………are purely acting as trolls.

        A troll’s objective is NOT to make positive contributions on a forum. Instead, they try to disrupt positive communication between others and at times act as predators to hurt others……..again, entirely because of their own character flaws.

        I have been moderator for MarketForum for the past couple of years:
        https://www.marketforum.com/

        I had to ban several trolls for neurotically posting mean personal attacks(far worse than Steve’s) and defiantly refusing to stop after numerous respectful warnings and opportunities to stop doing it.

        I understand why they could not stop too. It’s a character/personality flaw that defines who they are.

        Which is why Steve is destined to continue with this type of posting. Not breaking any rules or attacking with enough gusto to ever justify being banned but constantly making pot shot comments that intentionally disrupt discussions, resulting in others reacting = the reward for the troll who gains pleasure out of this.

        So Willis, your Korean model may have been understandably flawed………but it’s over and done, and you have moved on in your life to focus on positive things.
        Steve’s character flaws, however, continue to define many of his trolling posts here and show no sign of him moving on to eliminate that style…….. which would make him more of a trusted source here vs his current status……..a troll that revels in taking pot shots at people to upset them.

      • Willis

        Could you explain what the purpose of this post was

        https://wattsupwiththat.com/2020/03/13/the-math-of-epidemics/

        If it was to simply tell us that the total number of cases ( or deaths) will follow a Gompertz-like curve over time – then fine. I agree and so do all epidemiologists. Eventually the virus runs out of susceptible cases and the dwindling number of infected individuals is less likely to come into contact with the few remaining.

        This happens regardless of whether the infection is allowed to progress freely or it has been mitigated in some way.

        Your post, though, provided ‘expected values’ from the Gompertz model. These numbers are likely to be wrong – and possibly very wrong. South Korea has effectively suppressed the number of cases (and deaths) by rigorous testing and isolation (perhaps Steve M could confirm this). The epidemic in SK has been slowed significantly but it isn’t over. They are still dealing with local outbreaks.

        Your ‘prediction’ of 8048 cases was made when 7362 cases had been recorded. From this we can only assume that you thought the epidemic was nearing completion. It wasn’t – and isn’t.
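For readers following the argument: a Gompertz curve is a saturating S-curve whose upper asymptote K is the projected final total — which is what the disputed case estimate was. Below is a minimal sketch of fitting one to cumulative counts, using synthetic data and a crude grid-search fit; this is an illustration of the curve itself, not Willis’s actual method or the actual Korean numbers:

```python
import math

def gompertz(t, K, b, c):
    # Cumulative Gompertz curve: K is the asymptotic final total,
    # b shifts the inflection point, c sets the growth rate.
    return K * math.exp(-b * math.exp(-c * t))

def fit_gompertz(days, counts):
    """Crude fit: for a candidate final total K, the substitution
    z = ln(ln(K / y)) makes the model linear in t (z = ln b - c*t),
    so ordinary least squares gives b and c; grid-search K and keep
    the candidate with the smallest squared error."""
    n = len(days)
    tbar = sum(days) / n
    var_t = sum((t - tbar) ** 2 for t in days)
    best = None
    for K in range(int(max(counts)) + 1, int(max(counts) * 1.5)):
        z = [math.log(math.log(K / y)) for y in counts]
        zbar = sum(z) / n
        c = -sum((t - tbar) * (zi - zbar) for t, zi in zip(days, z)) / var_t
        b = math.exp(zbar + c * tbar)
        sse = sum((gompertz(t, K, b, c) - y) ** 2
                  for t, y in zip(days, counts))
        if best is None or sse < best[0]:
            best = (sse, K, b, c)
    return best[1], best[2], best[3]

# Noise-free synthetic "cumulative case counts" from a known curve.
days = list(range(5, 40))
counts = [gompertz(t, 8000, 8.0, 0.15) for t in days]
K_hat, b_hat, c_hat = fit_gompertz(days, counts)
```

On clean data the fit recovers the final total exactly; on real early-epidemic data, small changes in the last few points swing the projected K substantially, which is why early projections carry such wide uncertainty.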

    • “…Jesus willis’s model for Korea was a train wreck .

      he has no business talking…”

      That post is a train wreck.

      “On the order of 100” vs 282 to date for deaths, over 8,000 confirmed cases vs under 13,000 to date…and Willis was probably working with undercounts at the time which affected the overall accuracy. Wow, such a train wreck when the population of South Korea is approaching 52 million.

      Even an English lit major should know better.

        • +/- 50% can be good, or bad, depending on the problem being worked and the quality of the data.

          Anywho, the climate models can’t even get it that close.

          • Any model that is 50% off needs work. 25% might be acceptable, but geezzz…..50% isn’t even “skillful.”

          • Considering that the context was of a tentative forecast, no pretense of a reliable model, it truly is insane of certain people to trash Willis E. over such a thing.

            So far, I count just two people who’ve posted here as buying into this nonsensical ad hom? I’m sure you folks know who you are, and wouldn’t it be beneficial to you to make yourselves over into reasonable and perceptive individuals somehow?

      • Not just that, the detractors (Mosher and Pool) are at the same time totally satisfied and in awe of the 300% over-warming forecast of 30 years ago. Henry seems to have had a theory of his rebuffed here (he doesn’t talk about it anymore, just snipes at commenters). Mosher is in Korea and seems to feel some strange possessiveness over things Korean. Certainly Willis’s estimates have been far better than any others I’ve seen (show me where I’m wrong).

        The ‘experts’ in things Climate and Covid are the train wreck. Both synods know it now, too. That’s the source of the current end is nigh hysterics in Climate. Nature has slapped them up the head so often that their only fear is nothing special is really going to happen except greening of the planet and bumper harvests.

        • Gary, re: their only fear is nothing special is really going to happen except greening of the planet and bumper harvests….

          So true, and thus, we are now blessed with TV19.

    • I enjoy reading Willis’s posts but I agree. Michael Levitt (the Nobel prize winner) produced a similar Gompertz curve for SK with the inferred conclusion that cases would peak at around 8k.

      This is not a natural curve but one which has been suppressed by mitigation.

  8. Climate models appear to underestimate the solar effect. Even when the current minimum is over, it may take some time (more likely years than months) to show up in the global climate data, if someone is able to decipher it.
    For the time being, with the latest June data, the SC24 minimum is still ongoing, exceeding the duration of the last one:
    http://www.vukcevic.co.uk/SSN-23-24-min.htm

      • The wobble and gyrations of Earth’s orbit contribute to much more variability than the TSI.

        • TSI is not the only way the sun influences climate.
          But then, you’ve been told that before, and no doubt will have to be told it again in the future.

      • “From what I’ve been told, all of the models assume that the sun is a constant.”

        The models I’ve looked at make that assumption, but they do take into account the variation of total solar irradiance as a function of time. Because the Earth is in an elliptical orbit around the Sun, TSI varies by a total of 6.5% over the course of each and every year. In the frame of reference typically used for the Earth’s solar energy balance, that’s 22 W/m^2. The variability of the Solar constant is estimated to be 0.1%, with a period of 30 years. The annual variation overwhelms that.

        Willis makes great points with respect to validation and verification. I’ve been looking at the documentation for the Max Planck Institute for Meteorology’s ECHAM5 General Circulation Model, and found a rather startling false statement. The model does account for the changing distance of the Earth from the Sun over the course of each year by numerically solving Kepler’s equation. The algorithm used is Newton’s method, which the report caveats as follows: “This iterative solver does converge for most initial values, but not for all. This has been taken into account. For more details see Meeus (1998).”

        I happen to know a great deal about Kepler’s equation and its many, many solutions.
        Newton’s method converges for any initial guess as long as the eccentricity is less than 1, a fact for which mathematical proof exists. Why the authors of the model think otherwise is a mystery. I haven’t yet seen the reference they cite (Meeus 1998), but I have it on order from the publisher. More later.

        The use of Newton’s method for solving Kepler’s equation for Earth’s orbit around the Sun is humorous in and of itself, in that it exhibits how out of their depth the model’s authors are when it comes to simple things. But I can’t help wondering why they think that the method doesn’t converge for some initial guesses. Did they experience divergence in some of their runs? If so, the only explanation is a coding error. I have written codes requiring the same solution, both for the Earth-Sun system, and for highly eccentric Earth orbiting satellites. An iterative solver isn’t even warranted for the Earth-Sun system – the first few terms of the closed-form Fourier series solution (yes, there is one, with Bessel function coefficients) provides as much accuracy as required with less computational overhead, since the eccentricity of Earth’s orbit is so small. But if one didn’t want to go to all that effort (small as it is), the simplest iterative solver is successive substitution – and that also converges for any initial guess.

        Solving Kepler’s equation in order to get Earth-Sun distance and, thus, Solar intensity versus time is a very, very trivial problem, especially compared to the mind-numbing complexity of the computational fluid dynamics in the rest of the ECHAM5 model. If they had a problem with the former, how many problems are associated with the latter?
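For what it’s worth, the Newton iteration in question is only a few lines, and at Earth’s tiny eccentricity it converges in a handful of steps from the conventional starting guess E0 = M. A minimal stand-alone sketch (this is not the ECHAM5 code):

```python
import math

def solve_kepler_newton(M, e, tol=1e-12, max_iter=100):
    """Solve Kepler's equation  M = E - e*sin(E)  for the eccentric
    anomaly E (radians) using Newton's method, starting from E0 = M."""
    E = M
    for _ in range(max_iter):
        # Newton step on f(E) = E - e*sin(E) - M, with f'(E) = 1 - e*cos(E)
        delta = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            return E
    raise RuntimeError("Newton iteration failed to converge")

# Earth's orbital eccentricity is about 0.0167; verify that the solver
# leaves a tiny residual across a full orbit of mean anomalies.
e_earth = 0.0167
for M in (0.01, 1.0, math.pi, 5.0, 6.2):
    E = solve_kepler_newton(M, e_earth)
    assert abs(E - e_earth * math.sin(E) - M) < 1e-10
```

Since f'(E) = 1 − e·cos(E) stays close to 1 for small e, the equation is nearly linear and the iteration is extremely well behaved here, which underscores the comment’s point about how trivial this sub-problem is.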

          • Pat, I’m probably going to do so. This is part of an informal project on my part to go through one entire climate model, and find any flaws. So far, the one I’m vetting provides a target rich environment.

        • “If they had a problem with the former, how many problems are associated with the latter?”
          Who exactly is “they”? There is division of labor here. The solution of Kepler’s equation was probably assigned to a junior, maybe a student. He produced and documented something that worked, and required negligible time. That’s all they need.

          • “They” are the team that put the code together. The solution of Kepler’s equation was no doubt assigned to someone who didn’t know what he or she was doing, and neither you nor I have any idea whether what was produced works. On the very, very slim chance that the reference asserts that Newton’s method has any convergence problems applied to Kepler’s equation, I am withholding an out and out statement that the code and its documentation are wrong. I haven’t yet seen their reference, but numerous others with which I am very familiar state and prove the convergence properties. If the book they reference says that the method doesn’t converge for all initial guesses, the book is wrong, but at least they have a reason for stating it. Until I see the book, I won’t know whether it is wrong, or whether the code team made an error that produced diverging solutions. If it was the latter, and they “addressed” the matter in a way that caused the method to converge (or, rather, terminate), that is no guarantee to me that it converges to a correct answer. I’ve seen plenty of instances of non-linear solvers that purport to arrive at a solution, when in fact all they have done is reach an iteration limit while finding an argument value nowhere close to the right one.

          • Mich: compressible flow in pipes travelling at near sonic velocity can corn-fuse computers to this day. That and a bunch of other stuff. The user of the tool is the most important component in arriving at an interpretation that at least points toward fact. Without experience it simply becomes yet another TV19 show.

        • Mike, nice to see some classic science. Thanks, and yes it would get a welcome here as an article, particularly in the context of climate model inputs.

          BTW, Willis also examined that 6.5% swing in putative TSI difference between aphelion and perihelion and guess what? It couldn’t be detected in satellite-based temperature data (nor in land-based thermometry, I believe). He was investigating the common idea of a correlation between solar and surface temperature cycles in the surface T data and found no such relation. He then went to the 6.5% events and ditto!

          Why would that be? I’m convinced it arises from the Le Châtelier Principle, of which most outside of the chemistry field seem to be unaware. The idea is that when a multicomponent system of composition, volume, pressure, temperature … is perturbed by a change in one of its components, the system reacts to resist the change. I’ve commented on this at large before and suggested that, unsuspected by Le Châtelier himself, it is a far reaching natural Law. See even Wiki on this.

          https://en.m.wikipedia.org/wiki/Le_Chatelier%27s_principle

          I’m convinced that this is really the complete bundle of negative feedbacks that are not accounted for and should be a term in the climate formula relating forcings and T. My estimate of the new term, from the gross 300% exaggeration of the warming forecasts, is 0.33.

          • Gary: It’s even simpler than that, it’s called inertia: indisposition to motion, exertion, or change…it’s like most scientists and government workers living off of other folks taxes.

    • A paper came out a few days ago indicating SC25 will be one of the strongest in the last couple of centuries. The current Solar min may not be so grand.

  9. You have the same issue with “satellite data”, where the term satellite suggests the data will be sacrosanct. Actually, putting something into space helps very little with human error or moral hazard.

    The cloud radiative effect (CRE) is an incredibly important foundation of the GHE theory. The basic logic is that clouds cool the planet, and it then takes GHGs to warm it back up to the temperatures we observe. Accordingly, it was “predicted” (as a conditio sine qua non) by early alarmists. Later it was “confirmed” by satellite data.

    However, it is actually not about satellite data, rather these are models developed by the same people (or their affiliates) who predicted the negative CRE, which are then fed by some data taken from satellites. Mostly the model output depends on assumptions. As history shows, the results vary substantially and contradict each other, just because there have been alterations in the assumed parameters.

    A careful investigation of the subject, this time based on real data, which are weather records, without the use of “models”, reveals it is all wrong after all. The CRE is indeed positive and the GHE theory lacks its most significant foundation.

    https://www.docdroid.net/phJh2cU/the-strange-nasa-map1-doc

  10. Maybe “calamity” should be the group term for a collection of models. So we would have a flock of birds, a school of fish, a parliament of owls, a murder of crows and … a calamity of models!

    • Care to actually demonstrate where the label is inaccurate? Or are you going to pull a Mosher and just assume that everyone will be impressed because you actually seem to know what the word means?

    • Henry Pool June 30, 2020 at 11:17 am

      LOL polymath

      Well, let’s take a look at that claim:

      pol·y·math
      /ˈpälēˌmaTH/
      noun
      a person of wide-ranging knowledge or learning.
      “a Renaissance polymath”

      Yeah, that would be me. Among many other jobs, I’ve made my living as a boatbuilder, as an accountant, as a musician, as an artist, as a computer programmer, as a massage therapist, as a smuggler, as a commercial fisherman, as a small businessman, as a commercial diver, and as the Chief Financial Officer of a company with $40 million in annual sales.

      If it were just those, I think I’d qualify as a polymath … but there are plenty more. I was the piano man in a Manila whorehouse/bar, playing for beers and tips; not many people have that in their history. Here’s my CV; it makes for an interesting read while you’re sheltering in place.

      w.

      • Will: been thar, dunnit but meanwhile Mr. Pool’s spent a lot of time chasing photon’s around. I think we all git dizzy after a short or long while…meanwhile sheltering in space, probability waves always at the mercy of the house.

      • “Yeah, that would be me.”
        “but there are plenty more”

        Mr. Eschenbach, a “jack of all trades” is a master of none. A polymath is a master at many.

      • A “stable genius” would not claim that they are one. Ditto for a polymath. Trying to convince people you are a polymath betrays your insecurity.

        • Willis has actually demonstrated his many skills and extensive knowledge over years of posts here at WUWT, Henry.

          You, in contrast, have merely demonstrated a refractory contrarian ignorance in defense of AGW, occasionally entertaining yourself with a splash of fatuous mockery.

          View offered in a caring and nurturing way, with a nod to Dame Edna.

        • Could you give us a link defining “Henry Pool”? I’m convinced the second term, “pool”, is the defining one – it’s all wet!

          • No link needed, woz. Just look up the terms “butt-hurt” and “Trump Derangement Syndrome.” You’ll see Henry’s picture. It will look like he’s constipated.

  11. Anthony, Willis
    Thanks for providing this very informative discussion. I find it easier to listen than to read at times, and this one is a gem; I will forward the link to others, though some may not want to listen.

    One of the greatest areas of new discovery will be atmospheric connections, an area where I have spent a lot of time investigating, bringing multiple datasets together.
    Regards
    Martin

  12. Many years ago we tried to improve the models and ended up disproving them (see https://gmd.copernicus.org/articles/9/4097/2016/ ). We used the re-analysis datasets as a “best available observational set” and compared them directly to 12 of the best CMIP5 models. The results in the paper show that very few, if any, of the models reproduce even qualitatively the behavior of reality. We never got even a press release of our work out into the public…
    Regards Johan

  13. Has anyone carried out a study to determine how the Western World will cope with energy supplies should we be unfortunate to encounter, in the near future, another Little/Mini Ice Age, because as I see it everything seems to depend on the wind blowing, possibly rivers still flowing and, of course, the sun shining? I am not too sure about the current ‘all the eggs in one basket’ approach (and everything will be hunky dory)!

  14. Willis,
    From what I understand, the models’ initializing data at the start of a run are not even the observed current conditions of the climate (at the time) but a mishmash of mostly adjusted data and biased guesses.
    ~~~~~~~~~
    As one of the top climate scientists in the world, Kevin Trenberth said in journal Nature (“Predictions of Climate”) about climate models in 2007:

    None of the models used by the IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. There is neither an El Nino sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond. The Atlantic Multidecadal Oscillation, that may depend on the thermohaline circulation and thus oceanic currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.
    Therefore the problem of overcoming this shortcoming, and facing up to initializing climate models means not only obtaining sufficiently reliable observations of all aspects of the climate system, but also overcoming model biases. So this is a major challenge.

    I take it that little has changed since Kevin Trenberth made this statement.

  15. The so-called climate models have become of little interest to governments and AGW activists, including the United Nations and its offspring. The anthropogenic-CO2 global warming catastrophe has become axiomatic despite the obvious and serious flaws, and despite the terrible consequences. The UN Sustainable Development Goals are powering this train wreck.

    However, Panic2020 has destroyed so much wealth and damaged so many aspects of western society (businesses, schools, health care, pensions, and on & on) that most people have more serious and immediate problems.
    There are images of cats hanging on to a wall, rope, or branch.
    Check your picture dictionary under “global warming.”

  16. Many in the public automatically put faith in fancy graphs without questioning the process and evidence that went into the calculations that produced the graph. Generally the fancier the graph the more likely people believe it. Add in the words produced by such and such institute or university and the public believes the graph even more.

  17. I find it surprising that a polymath would be on the wrong side (3%) of science. Pay attention to real scientists.

    • Its not “Pay attention to real scientists” its pay attention to the science. All this sniping at models is just an intentional distraction from the observed science. Oh look, the hockey stick got debunked again. /sarc

      “The multi-proxy database includes a total of 1319 paleo-temperature records from 470 terrestrial and 209 marine sites where ecological, geochemical and biophysical proxy indicators have been used to infer past temperature changes.”
      Published: 30 June 2020 https://www.nature.com/articles/s41597-020-0530-7

      • There is no science that shows that CO2 is responsible for most, much less all of the current warming.

        BTW, I know you aren’t smart enough to figure this out, but just showing that it has warmed is not proof that CO2 caused it.

    • The 97% claim has been disproven so many times, that only a climate scientist would be dumb enough to still believe it.

    • Henry P. Three percent is probably right. Dissidents in the Soviet Union numbered fewer, although the silent ones probably made it up to 3%. It was dangerous there to be a dissident.

      It isn’t dangerous here yet, but it’s been trending that way. Recall the Hundred Scientists against Einstein. There the dissident was 1%! Numerate and literate dissidents generally come from the right hand tail of the IQ curve. Courage of course is a factor. It’s easy to go with the current and no hard thinking is required.

  18. Turns out, it was hugely flawed, and the code was a train wreck.

    This obsession with the quality of Neil Ferguson’s coding is a red herring. Ferguson used some basic assumptions and these led to certain conclusions. The key assumptions were:

    Population were 100% susceptible to infection. (debatable but no definite proof either way)
    R0 was about 2.5 (about right – higher in some locations)
    Population was largely – not totally – homogeneous (possibly underestimated heterogeneity)
    IFR was 0.9% (a bit high)

    Under these assumptions, 81% would eventually become infected (correct – similar to Nic Lewis’s model without heterogeneity). Now forget modelling – just do the sums:

    UK population – 67 million
    81% infected – 54.3 million
    IFR 0.9% – ~490k deaths

    The model is irrelevant. Basic assumptions & simple arithmetic lead to a death toll very similar to Ferguson’s. By all means, question the assumptions but stop being sidetracked by models.
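    The sums above can be written out explicitly. This is just the commenter’s arithmetic under the stated assumptions, not anyone’s epidemiological model:

```python
population = 67_000_000   # UK population, approx.
attack_rate = 0.81        # fraction eventually infected under the stated assumptions
ifr = 0.009               # assumed infection fatality ratio of 0.9%

infected = population * attack_rate   # roughly 54.3 million
deaths = infected * ifr               # roughly 488,000, close to Ferguson's ~500k
print(f"{infected:,.0f} infected, ~{deaths:,.0f} deaths")
```

    Three multiplications reproduce the headline number, which is the commenter’s point: the contested inputs, not the simulation machinery, drive the result.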

  19. Read up on the early history of IBM, the “counting engines” and how they were used to promote all manner of political agendas – including in Nazi Germany but also the US.
    As those of us in the computing industries know, models are all about the beliefs of the creator(s). Testing is how you reconcile reality with belief.

  20. As a result, Neil Ferguson’s COVID-19 model could be the most devastating software mistake of all time.

    I followed the link to the Telegraph article by David Richards and Konstantin Boudnik and was very disappointed. After touting their credentials as software designers and developers they then state:

    Imperial’s model appears to be based on a programming language called Fortran, which was old news 20 years ago and, guess what, was the code used for Mariner 1.

    Which is not what I had read previously and which was then retracted by this note at the end:

    Clarification: Imperial College has asked us to make clear that its Covid modelling code is not written in Fortran but in C and that it has been applied in a way that is both deterministic and reproducible. It says it is only one of many pieces of evidence/advice on which the Government relies.

    So somebody apparently checked the facts with Imperial College.

    But the disappointing thing is I have to conclude these two highly-credentialed software experts clearly did not actually look at the code, which means they are simply repeating what they’ve heard or read elsewhere. Indeed there is in the first Telegraph article a link to a second, which states:

    We now know that the model’s software is a 13-year-old, 15,000-line program that simulates homes, offices, schools, people and movements. According to a team at Edinburgh University which ran the model, the same inputs give different outputs, and the program gives different results if it is run on different machines, and even if it is run on the same machine using different numbers of central-processing units.

    There is no link to an actual report from Edinburgh University.

    What I had read previously from a Google engineer was that the only thing released so far has been a rework of the original C code into C++ by Microsoft engineers hastily brought in by Imperial College. So far as I know, nobody outside of Imperial College has seen the original code. It might be as bad as these articles claim, but we don’t positively know that (it could of course be much worse).

    This brings me to my real point. I’m not bashing Neil Ferguson or Imperial College here. Academics simply do not work in the same kind of accountability and liability environment that industry does, and their working practices reflect this. The real fault is with the government officials who put the force of law behind an unverified computer model from a researcher with a less than spotless prior record.

    Another complaint: where were our intelligence agencies? I’m sure they’re monitoring all kinds of potential threats to national security – military, economic and others. I’m sure they have plenty of people qualified to do detailed code audits, but they wouldn’t even need to go that far. It would be a two-hour job for an agency researcher to assemble the complete track record of Ferguson’s previous predictions. We’ve put 40 million people out of work based on model output that nobody ever audited. That’s not Ferguson’s fault, even if his model is crap.
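    On the “different results with different numbers of central-processing units” point quoted earlier: that behaviour has a mundane mechanism, whether or not it is what happened in this particular code. Floating-point addition is not associative, so a multi-threaded reduction that partitions the work differently combines its partial sums in a different order and can return a different total for identical inputs. A toy illustration (my own, nothing to do with the actual Imperial code):

```python
# Floating-point addition is not associative: (a + b) + c can differ
# from a + (b + c).  A parallel sum that splits work across threads
# combines partial sums in an order that depends on the thread count.
vals = [1e16, 1.0, -1e16, 1.0]

# One "thread": strict left-to-right accumulation.
serial = ((vals[0] + vals[1]) + vals[2]) + vals[3]

# Two "threads": each sums half the list, then the partials combine.
parallel = (vals[0] + vals[2]) + (vals[1] + vals[3])

print(serial)    # 1.0 -- the first 1.0 was absorbed by 1e16 and lost
print(parallel)  # 2.0 -- both 1.0s survive
```

    A deterministic parallel code avoids this by fixing the reduction order (or using higher-precision accumulation), which is presumably what the validation regimes mentioned below require.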

    • Alan: the codes mentioned were in use I believe back in the 80’s. In nuclear plant design we were routinely required to validate reproducible outputs for given inputs as part of a project. I would guess that TV19 transmission is a relatively simple problem to analyze when compared to what engineers have to deal with routinely not only at the nukes but many other industries where the outcome of analysis can be the difference between life and death/profit and loss.

      What’s going on is a crock.

  21. One of the most important characteristics and virtues of consensus climate models is uncertainty. Paradoxical, because critics often cite model uncertainty as a big defect of climate models! Not so.
    1. By admitting to uncertainty, models dispense with any need for quality control or traditional scientific testing and validation. It lets them do what they want.
    2. Uncertainty also gives modellers vast scope to add kludges to their models; especially kludges leading to positive feedbacks. Again: it lets them do what they want.

    The big difference between climate models and any other models is how similar all climate models are while claiming to be ‘different’ (there are over 100 of them). All these ‘different’ models crib basic radiative forcing ideas from Manabe and Wetherald (1967) and Held and Soden (2000). So the different models are actually all minor variations on the same model! This is clearly a carefully policed operation, because other (actually different) models of the greenhouse gas effect have been written, such as those by: Miskolczi, David Evans, Dai Davies, Nasif Nahle, …

    • Mark: Good points…in business when formulating and testing thermal performance guarantees we have to take uncertainty into account, and Performance Test Codes provide standard means of calculating and applying uncertainty. Failure to understand and properly apply the PTC can lead to significant loss of money in the form of Liquidated Damages.

      Do the other climate models you mention give more realistic answers? Do they quantify their uncertainty? Are they useful for hind casting?

      Or just another bunch of gigo?

Comments are closed.