The Climate Model Muddle


Guest post by Ed Zuiderwijk

This is a post about the epistemology of climate models: about what we can learn from them about the future. The answer will disappoint: not much. To convince you of that proposition I will first tell you a little story, an allegory if you will, in the form of a thought experiment: a completely fictitious account of what a research project might look like. I will then apply whatever insight we gain (if any) to the climate modelling scene.

A thought experiment

Here’s the thought experiment. We want to make a compound that produces colour somehow (the mechanism by which it does so is not really relevant). However, we specifically want a well-defined colour, prescribed by whatever application it is going to be used for. Say a shade of turquoise.

Now, our geology and chemistry colleagues have proposed some minerals and compounds that could be candidate materials for our colourful enterprise. Unfortunately there is no information whatsoever about what colours these substances produce. This circumstance is compounded by the fact that the minerals are exceedingly rare and therefore extremely expensive, while synthetic ones are really difficult to make and therefore even more pricey. So how do we proceed; how do we find the best compounds to try? Getting a sample of each of the many compounds and testing each of them for the colour it produces is out of the question. What we do instead is model the physics of the colour-producing process for each of the proposed compounds, in order to find those which render turquoise, if there are any. That sounds straightforward enough, but it isn’t, because there are several different codes available, in fact 5 in total, that purport to do such a simulation, each with its own underlying assumptions and idiosyncrasies. We run these codes for the proposed compounds and find that, unfortunately, the colours they predict are inconsistent for individual compounds and generally all over the place.

For instance, take the compound Novelium1. The predicted colours range from yellow-green to deep violet, with a few in between like green, blue or ultramarine: a factor of 1.3 range in frequency; similarly for the other candidates. In this situation the only way forward is to do an experiment. So we dig deep into the budget, get a sample of Novelium1, and see what colour it actually produces. It turns out to be orange-red, which is pretty disappointing. We are back where we started. And because of our budgetary limitations we are at the point of giving up.

May we here introduce a member of our team. Let’s call him Mike. Mike is a bit pushy, because he fully realises that were we to succeed in our aim it would get us some prestigious Prize or another, something he is rather keen on. He proposes the following: we take the model that predicted the colour closest to the actual one, that’s the model that gave us yellow-green, and tweak its parameters such that it predicts orange-red instead. This is not too difficult to do, and after a few days of jockeying on a keyboard he comes up with a tweaked model that produces the observed colour. Alacrity all around, except for one or two more skeptical team members who insist that the new model must be validated by having it correctly predict the colour of compound Novelium2. With that Prize riding on it this clearly is a must, so we scrape the bottom of the budget barrel and repeat the exercise for Novelium2. The tweaked model predicts yellow. The experiment gives orange.

We gave up.

What does it mean?

Can we learn something useful from this story? In order to find out we have to answer three questions:

First, what do we know after the first phase of the project, the modelling exercise, before doing the experiment? Lamentably, the answer is: nothing useful. With 5 different outcomes we only know for certain that at least 4 of the models are wrong, but not which ones. In fact, even if the colour we want (turquoise) shows up, we still know nothing. Because how can one be certain that the code producing it gives the ‘correct result’, given the outcomes of the other, a priori equally valid, models? You can’t. If a model gave us turquoise it could just be a happy coincidence, with the model itself still flawed. The very fact that the models produce widely different outcomes therefore tells us that most probably all the models are wrong. In fact, it is even worse: we can’t even be sure that the true colour produced by Novelium1 is inside the range yellow-green to violet, even if there were a model that produces the colour we want. In the addendum I give a simple probability-based analysis to support this and subsequent points.

Second, what do we know after the unexpected outcome of the actual experiment? We only know for certain that all models are wrong (and that it is not the compound we are looking for).

Third, why did Mike’s little trick fail so miserably? What happened there? The parameter setting of the original, un-tweaked model encapsulates the best understanding – by its makers, albeit incomplete, but that’s not really relevant – of the physics underpinning it. By modifying those parameters that understanding is diluted, and if the ‘tweaking’ goes far enough it disappears completely, like the Cheshire Cat disappearing the more you look at it. Tweaking such a model in hindsight to fit observations is therefore tantamount to giving up the claim that you understand the relevant physics underlying the model. Any pretence of truly understanding the subject goes out of the window. And with it goes any predictive power the original model might have had. Your model has just become another very complex function fitted to a data set. As the mathematician and physicist John von Neumann once famously said of such practice: ‘with four parameters I can fit an elephant, and with five I can make him wiggle his trunk’. The tweaked model is most likely a new incorrect model that coincidentally produced a match with the data.
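To make the von Neumann point concrete, here is a minimal sketch in Python, with made-up numbers rather than any real physical quantity, of what ‘tweaking until it fits’ amounts to: a model with enough free parameters reproduces the observations essentially exactly, yet its extrapolation outside the fitted range bears no relation to the simple rule that actually generated the data.

```python
# Illustrative toy only: the "underlying rule" and all numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# The underlying rule (unknown to the modeller): a gentle linear trend plus noise.
x_obs = np.linspace(0, 5, 8)
y_obs = 0.8 * x_obs + rng.normal(0, 0.3, x_obs.size)

# The "tweaked" model: enough free parameters to match the observations exactly.
degree = 7                                   # von Neumann's elephant, and then some
coeffs = np.polyfit(x_obs, y_obs, degree)

x_future = 7.0                               # a point outside the fitted range
print("worst misfit at the observations :", np.max(np.abs(np.polyval(coeffs, x_obs) - y_obs)))
print("tweaked-model prediction at x=7  :", np.polyval(coeffs, x_future))
print("underlying rule at x=7           :", 0.8 * x_future)
```

The fit to the observations is essentially perfect; the prediction outside them is not.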

An application to climate models

Armed with the insights gleaned from the foregoing cautionary tale we are now in a position to make some fundamental statements about IPCC climate models, for instance the group of 31 models that form the CMIP6 ensemble (Eyring et al, 2019; Zelinka et al, 2020). The quantity of interest is the Equilibrium Climate Sensitivity (ECS) value, the expected long-term warming after a doubling of atmospheric CO2 concentration. The predicted ECS values in the ensemble span a range from 1.8C at the low end to 5.6C at the high end, a whopping factor of 3, more or less uniformly occupied by the 31 models. Nature, however, may be cunning, even devious, but it is not malicious. There is only one ‘true’ ECS value that corresponds to a doubling of CO2 concentration in the real world.

Can we make any statement about this ensemble? Only these two observations:

First, most probably all those models are incorrect. This conclusion follows logically from the fact that there are many a priori equally valid models which cannot all be simultaneously correct. At most one of these models can be correct, but given the 30 remaining incorrect models the odds are against any model at all being correct. In fact it can be shown (see the addendum) that the probability that none of the models is correct can be as high as 0.6.

Second, we cannot even be sure that the true ECS is in the range of ECS values covered by the models. The probability of that being the case is 1.0-0.6=0.4, which means that the odds that the true ECS is in the range covered by the models are roughly 2 to 3 (and thus it is odds-on that the true ECS lies outside that range). The often made assumption that the ‘true’ ECS value must be somewhere in the range of outcomes from the models in the ensemble is based on a logical fallacy. We have absolutely no idea where the ‘true’ model – number 32, the ‘experiment’ – would land, inside or outside the range.

There are some qualifications to be made. What, for instance, does it mean that a model is ‘incorrect’? It means that it could be incomplete — there are concepts or principles missing from it that should be there — or, conversely, over-complete — containing things that should not be there — or that there are aspects of it which are just wrong or wrongly coded, or all of those. Further, because many models in the ensemble have similar or even identical elements, one might argue that the results of the ensemble models are not independent, that they are correlated. That means that one should consider the ‘effective number’ N of independent models. If N = 1 it would mean all models are essentially identical, with the range 1.8C to 5.6C an indication of the intrinsic error (which would be a pretty poor show). More likely N is somewhere in the range from 3 to 7 – with an intrinsic spread of, say, 0.5C for an individual model – and we are back at the hypothetical example above.

The odds of about 3 to 2 that none of the models is correct ought to be interesting politically speaking. Would you gamble a lot of your hard-earned cash on a horse with those odds? Is it wise to bet your country’s energy provision and therefore its whole economy on such odds?

Hindcasting

An anonymous reviewer of one of my earlier writings provided this candid comment, and I quote:

‘The track record of the GCM’s has been disappointing in that they were unable to predict the observed temperature hiatus after 2000 and also have failed to predict that tropopause temperatures have not increased over the past 30 years. The failure of the GCM’s is not due to malfeasance but modelling the Earth’s climate is very challenging.’

The true scientist knows that climate models are very much a work in progress. The pseudo scientist, under pressure to make the ‘predictions’ stick, has to come up with a way to ‘reconcile’ the models and the real world temperature data.

One way of doing so is to massage the temperature data in a process called ‘homogenisation’ (e.g. Karl et al, 2015). Miraculously the ‘hiatus’ disappears. A curious aspect of such ‘homogenisation’ is that whenever it is applied the ‘adjusted’ past temperatures are always lower, thus making the purportedly ‘man-made warming’ larger. Never the other way around. Obviously, you can do this slight of hand only once, perhaps twice if nobody is watching. But after that even the village idiot will understand that he has been had and puts the ‘homogenisation’ in the same dustbin of history as Lysenko’s ‘vernalisation’.

The other way is to tweak the model parameters to fit the observations (e.g. Hausfather et al., 2019). Not surprisingly, given the many adjustable parameters and keeping in mind von Neumann’s quip, such hindcasting can make the models match the data quite well. Alacrity all around in the sycophantic main-stream press, with sometimes hilarious results. For instance, a correspondent for a Dutch national newspaper enthusiastically proclaimed that the models had correctly predicted the temperatures of the last 50 years. This truly would be a remarkable feat, because the earliest software that can be considered a ‘climate model’ dates from the early 1980s. However, a more interesting question is: can we expect such a tweaked model to have predictive power, in particular regarding the future? The answer is a resounding ‘no’.

Are climate models useless?

Of course not. They can be very useful as tools for exploring those aspects of atmospheric physics and the climate system that are not understood, or even of which the existence is not yet known. What you can’t use them for is making predictions.

References:

Eyring, V., et al., Nature Climate Change, 9, 727 (2019)

Zelinka, M., et al., Geophysical Research Letters, 47 (2020)

Karl, T.R., Arguez, A., et al., Science, 348, 1469 (2015)

Hausfather, Z., Drake, H.F., et al., Geophysical Research Letters, 46 (2019)

Addendum: an analysis of probabilities

First, the case of 5 models of which at most 1 can possibly be right. What is the probability that none of the models is correct? All models are a priori equally valid. We know that 4 of the models are not correct, so we know at once that the probability of any given model being incorrect is at least 0.8. The remaining model may or may not be correct, and in the absence of any further information both possibilities could be equally likely. Thus the expectation is that, so to speak, half a model (out of 5) is correct, which means the a priori probability of any given model being incorrect is 0.9. For N models it is 1.0-0.5/N. The probability that all models fail then becomes F=(1-0.5/N)^N, which is about 0.6 (for N > 3). This gives us odds of 3 to 2 that none of the models is correct: it is more likely that none of the models is correct than that one of them is. (Had we instead taken F=(1-1/N)^N, the number is about 0.34, with odds of about 1 to 2.)
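For readers who want to check the arithmetic, here is a minimal sketch in Python of the two estimates above; the values of N are just illustrative.

```python
# Assumes the addendum's premise: at most one model in an ensemble of N can be
# correct, and whether that one is correct is taken as a 50/50 proposition, so
# P(a given model is wrong) = 1 - 0.5/N and P(all N are wrong) = (1 - 0.5/N)^N.
for N in (3, 5, 10, 31):
    p_none_correct = (1 - 0.5 / N) ** N
    p_none_correct_alt = (1 - 1.0 / N) ** N   # the alternative prior mentioned in the text
    print(f"N={N:2d}  (1-0.5/N)^N = {p_none_correct:.2f}   (1-1/N)^N = {p_none_correct_alt:.2f}")
```

For N = 31, the size of the CMIP6 ensemble discussed above, the first expression still comes out at about 0.6.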

Now an altogether different question. Suppose one of the models does give us the correct experimental result: what is the a posteriori probability that this model is indeed correct, given the results of the other models? Or, alternatively, that the model is incorrect even though it gives the ‘right’ result (by coincidence)? This posterior probability can be calculated using Bayes’ theorem,

P(X|Y) = P(Y|X)*P(X)/P(Y),

where P(X|Y) stands for the probability of X given Y and P(X) and P(Y) are prior probabilities for X and Y. In this case, X stands for ‘the model is incorrect’ and Y for ‘the result is correct’, in abbreviated form M=false, R=true. So the theorem tells us:

P(M=false|R=true) = P(R=true|M=false) * P(M=false) / P(R=true)

On the right-hand side the first term denotes the false-positive rate of the incorrect models, the second term is the prior probability that a model is incorrect, and the third is the average probability that a model returns the correct result. Of these we already know P(M=false)=0.9 (for 5 models). To get a handle on the other two, the ‘priors’, consider a results table in which a number of possible ensembles are assigned rates of true and false results.

The ‘rate’ columns of such a table represent possible ensembles differing in the badness of their incorrect models. The first lot still gives relatively accurate results (incorrect models that often, though not always, return approximately the correct result; pretty unrealistic). The last has seriously poor models which only on occasion give the correct result (by happy coincidence), with a number of cases in between. Obviously, if a model is correct there is no false-negative (TF) rate. The false-positive rate of the incorrect models is given by P(R=true|M=false) = FT. The expected rate of true results is P(R=true) = 0.1*TT + 0.9*FT, which comes to 0.82 for the first group, 0.55 for the second, and so on.

With these priors Bayes’ Theorem gives the posterior probabilities that the model is incorrect even though the result is right: 0.87, 0.82, and so on. Even for seriously poor models with only a 5% false-positive rate (the 5th set), the odds that a correct result was produced by an incorrect model are still about 1 to 2. Only if the false-positive rate of the incorrect models drops dramatically (the last column) can we conclude that a model that reproduces the experimental result is likely to be correct. This circumstance is purely due to the presence of the incorrect models in the ensemble. Such examples show that in an ensemble with many invalid models the posterior likelihood of the correctness of a possibly correct model can be substantially diluted.
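Since the results table is easiest to grasp numerically, here is a minimal Python sketch of the Bayes calculation above. The prior P(M=false)=0.9 and TT=1 follow the text; the false-positive rates FT are illustrative values chosen to bracket the cases discussed.

```python
# Posterior probability that a model is incorrect even though it returned the
# 'right' result, P(M=false | R=true), for a range of assumed false-positive rates.
p_m_false = 0.9      # prior: probability that a given model (out of 5) is incorrect
tt = 1.0             # a correct model always returns the right result (no false negatives)

for ft in (0.8, 0.5, 0.2, 0.1, 0.05, 0.001):
    p_r_true = (1 - p_m_false) * tt + p_m_false * ft   # average rate of 'right' results
    posterior = ft * p_m_false / p_r_true              # Bayes' theorem
    print(f"FT={ft:5.3f}  P(R=true)={p_r_true:.3f}  P(M=false|R=true)={posterior:.2f}")
```

The first two entries match the 0.87 and 0.82 quoted above to within rounding; only for a tiny false-positive rate does the posterior drop to a level at which a matching model is probably a correct one.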

——– 

127 thoughts on “The Climate Model Muddle”

    • Yeah but…. if they can get enough people to agree that 2 + 2 = 5 with rounding, then anything is possible. Like Bill Clinton, “It depends on what the definition of “is” is.”

      Note: 2.4 + 2.4 = 4.8, for most folks of at least average intelligence. But if you can convince them that using rounding, where 2.4 becomes 2 and 4.8 becomes 5, will produce 2 + 2 = 5, then you can convince them of anything. Call it New Math for an Orwellian New World Order.

      • Math is not a strong point of those who accept the IPCC’s distorted ‘science’ and even many skeptics have been misdirected by their bogus math.

        The undeniable truth is that each of the 240 W/m^2 of average RADIANT solar forcing uniformly results in 1.62 W/m^2 of average RADIANT surface emissions, where 1 W/m^2 of each is offset by the forcing itself and the remaining 620 mW/m^2 are offset by delayed ‘feedback’ from the atmosphere (a more proper use of the term).

        Far too many believe that 1 W/m^2 of forcing plus 600 mw/m^2 of feedback from the atmosphere is somehow amplified by magic they misrepresent as ‘temperature feedback’ in order to offset the 4.4 W/m^2 of incremental surface emissions that arise from the nominal 0.8C temperature increase claimed to result from the next W/m^2 of forcing.

        It’s like they don’t comprehend the implications of 1 Watt being 1 Joule per second, that Joules are the units of work and that in the steady state, the only work required is to maintain the temperature by replacing emissions proportional to T^4.

  1. Don’t mind me, I don’t have anything to offer. I’m just waiting to see how Nick or Mosh will explain to us how the entire exercise above is wrong.

    • But Nick and Mosh only weigh in when they think they sense a weakness in the argument, or think they can drive a wedge between opponents. Otherwise, crickets.

  2. I would love to see the epidemiology world try to adopt the climate scam approach of a CMIP-like ensemble mean of many epidemiology model outputs to “predict” the future trajectory of SARS-CoV-2 infections.

    The immediate outcry of methodological inappropriateness of such an epidemiology ensemble from actual scientists would then expose the naked-emperor reality of the CMIP climate model ensemble scam being foisted on the public by the IPCC.

    • Forecasts are continuous and differentiable functions over a specified, notably limited path. Predictions use inference to step over missing links, smoothing or puncturing to reduce complexity, and forcing to match a preconceived conclusion.

    • Models (e.g. evolution) are hypotheses about states and transitions, which may or may not coincide with reality.

  3. This is one of the most enjoyable reads I’ve seen in some time. Great entertainment. Can you write another? Except one based on how to create a new emotion. One more thing: can you change “Mike” to “Dan”?

  4. I realize this was not written by a native English speaker, and could figure out most of what the writer was saying. But the way the word “Alacrity” is being used …. ????

      • How is this sentence correct?

        “Alacrity all around in the sycophantic main-stream press, with sometimes hilarious results.”

        By your reference, alacrity is a noun, which makes this a sentence fragment — but that’s just to start — maybe he wanted to use a sentence fragment. However, when alacrity is used as a noun, it almost always comes after “with”, as in some job or service performed “with alacrity”. (Notice that is how alacrity is used in the example in your reference.)

        From context, at a guess, the author is trying to use “alacrity” to mean something like “cheerfully and promptly accepted” or “cheerfully and promptly promoted”. Assuming this to be the case, what the author needs to say is something like

        “Accepted with alacrity all around in the sycophantic main-stream press, with sometimes hilarious results.”

        or

        “Promoted with alacrity all around in the sycophantic main-stream press, with sometimes hilarious results.”

        And as long as we’re editing this prose for readability, note also that “all around” is not wrong, exactly, but it is somewhat unusual diction — replacing it with the word “everywhere” is probably better. The final result —

        “Promoted with alacrity everywhere in the sycophantic main-stream press, with sometimes hilarious results.”

        • Using a sentence fragment is an effective way to make a point as long as the meaning is clear, and I say that with alacrity.

        • D. Cohen
          I prefer your construction, but sometimes it is just a matter of style.

    • “Alacrity all around” also struck me as weird. But OK, I didn’t know the author’s not a native speaker. So I must forgive “slight of hand.” Should be “sleight of hand,” of course.

  5. All models are based on one common fundamental flaw: the greenhouse effect which does not exist.

    1) the atmosphere makes the earth cooler not warmer.
    2) the GHGs require “extra” energy to “trap” and “back” radiate.
    3) the surface cannot radiate that “extra” energy by BB.

    1+2+3 = zero RGHE.

    • The back radiation is a misleading term. What is actually happening is resistance to radiation to space plus the effect of the lapse rate. The back radiation exists, but is nothing but trapped energy going back and forth (the resistance).

      • The radiation is much like that in a laser. The laser must be continually pumped or its output will soon go to zero. Just like a damped sine wave. The same thing happens with the Earth. Any radiation from the atmosphere toward the earth will be re-radiated by the earth toward space. Some of it will escape and some will “bounce back” from the atmosphere. This will continue as a damped sine wave. Only if the laser (i.e. the earth) is pumped again (by the sun) before the sine wave has reached zero will there be residual heat left behind.

    • Nick….
      instead of being confused with photons and back radiation and the like, clearly not the way your mind works, let’s just assume photons aren’t “heat” until they are absorbed by something, we’ll just talk old school caloric “heat”….
      Referring to the Trenberth type chart published here on WUWT just a week ago, June 22. The surface receives about 162 watts/sq.m of heat from the Sun, thermals take away about 18, evaporation takes away about 86, 40 is radiated from the ground directly to outer space, and about 18 is the net heat radiated from ground to clouds and blue or night sky. …
      There, notice I fixed the pesky back radiation that you insist is a recycle problem, just for you….. but trust me, it’s still there, just like your 37 C face emits 520 watts/sq.m worth of photons as you stare at the 20C ceiling pondering this problem. The ceiling is radiating photons back to your face at 480 watts/ sq.m. for a net 40 watts/sq.m. of “heat” that is supplied by your metabolism. Sorry, that’s how the GHE works too.

      • Not exactly the correct analogy. The ceiling is at a low temperature and intercepting a portion of your 520 w/m^2 and sending it back toward you raising your temperature higher than 37C. Which would mean you begin radiating 520+ w/m^2, say 620 w/m^2. And on and on.

        Remember this isn’t a situation where the 520 is gone when it radiates, and part comes back, it is a continuous 520 w/m^2. So if you absorb part of what you’ve already radiated, you get hotter and get even more back.

        My thermodynamic training was that heat is a flux with a gradient from hot to cold. Different materials may have different heat resistances and subsequent gradients.

        A lot of people think in terms of photons being bullets that go in single directions but that isn’t what IR does. It is an EM wave with energy traveling in an expanding sphere i.e., all directions.

      • “…37 C face emits 520 watts/sq.m…”
        (273+37)^4 * 1.0 * 0.0000000567=523.6 W/m^2
        This assumes the face is a BB, it absorbs and emits ALL of the energy.

        But suppose about 55% of the energy leaving the face does so by conduction, convection, advection and latent.
        Emissivity = 1 – .55 or 0.45
        (273+37)^4*(1-.55)*0.0000000567=235.6 W/m^2.

        This is what my experiment demonstrated.
        Radiation does not function separately from the non-radiative processes.

        “The ceiling is radiating photons back to your face at 480 watts/ sq.m. for a net 40 watts/sq.m.”

        Total & complete nonsense.
        Both surfaces flow toward the nearest colder and even ultimate sink not toward each other.
        The measured in/out heat balance of my experiment detected exactly zero “net” radiation going “boink” between the surroundings and the heating element.

        In a closed system energy will flow from a hot surface toward a cold surface until they are at equilibrium, same temperature, and flow then stops.

        How ’bout you crafting an experiment demonstrating how hot & cold go “boink” somewhere in between to produce a “net.”
        Plug/unplug that refrigerator and record temperatures looking for any “boinking.”
        If this “net” were an actual thang there would be refrigerators without power cords.
        I haven’t seen any?
        You?

        All of this pseudo-thermodynamic nonsense exists to explain how the GHGs warm the atmosphere and the earth.
        The atmosphere cools the earth and the GHGs don’t “warm” anything which explains why the RGHE is so bogus.
        It explains what does not exist.

    • Let’s simplify.

      1) Because of the 30% albedo the earth is cooler with the atmosphere/albedo. W/o it the earth would be much like the moon not a 255 K frozen ice ball. Nikolov, Kramm (U of AK) and UCLA Diviner mission all recognize this.
      If correct the greenhouse effect is not.

      2) GHGs get their “trapped”/”back”/whatevah “extra” energy from the upwelling BB LWIR. I have demonstrated by experiment that because of the non-radiative heat transfer properties of the contiguous atmospheric molecules this BB upwelling LWIR is not possible.
      If correct the greenhouse effect is not.

      All of your assorted molecular level, QED, etc. handwavium explanations are for a greenhouse warming effect that does not exist and does not address either of my two points above.

  6. What you can’t use them for is making predictions.

    But haven’t we already known this for almost 20 years:

    “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.”

    https://www.ipcc.ch/site/assets/uploads/2018/03/TAR-14.pdf

    Section 14.2.2.2, p. 774

    • If all the models in the ensemble are wrong then there is no way for the ensemble to capture anything resembling the true value. The true value may lie totally outside the interval described by the ensemble.

      As Ed points out even a wrong model can, by coincidence, give a true value at a single point, especially if it is tuned to do so. That says nothing about the values predicted by the model at any other point however.

      Personally I still think the use of an “average temperature” is useless, especially a “global average temperature”. The climate is determined by the overall temperature profile, not by an average. Only a climate scientist can determine that the earth is going to turn into a cinder from higher temperatures by looking at an average. Most 6th graders can tell you, however, that you can’t determine the minimums and maximums of a data set by looking at the average. And it is the minimums and maximums that actually determine the temperature profiles.

      • Tim:

        If all the models in the ensemble are wrong then there is no way for the ensemble to capture anything resembling the true value. The true value may lie totally outside the interval described by the ensemble.

        Which appears, by their own admission, to be exactly what a consensus of scientists at the IPCC has concluded as well.

        Thus why are we still arguing over the viability of models?

        What say you?

        • Sheldon,

          I enjoyed your article. Certainly a winter/summer average is more informative than a global average. However, you still lose data with an average of the real absolute temperature. For me the heating and cooling degree-days would be much more informative (i.e. the integral of the temperature curve below and above a target temperature over a time interval, e.g. 65degF).

          I would also point out that, while your article alludes to it but never actually states it, climate is local or regional; it is certainly not global. There is no “global” climate.

          • Tim

            I agree with you, that climate is local or regional. It is not global.

            Nobody dies because the global average temperature increased. If somebody dies, it is because their local temperature increased or decreased.

            There are many places on the Earth which will be better off with a little “global” warming. But don’t tell anybody, because people might not panic if they knew the truth.

      • Tim,
        I think the assumption, which is not proven, is that half the models will be too warm, and half too cool. If that were the case, then the average should be close to being correct. However, I have not seen any evidence presented that the assumption is valid. All the model runs could be too cool, or all could be too warm, which seems to be the more probable situation. Then some model forecasts will be much too warm, and some slightly too warm, but the average will be worse than the ones that are only slightly too warm.

        And, it is even more problematic to average SSTs with LATs to derive a weighted-average that is actually meaningful for land animals and plants. The SSTs will dampen the range of the LATs and hide trends in the LATs.

        • As Pat Frank has shown, all of the models have an uncertainty interval that is wider than the anomaly interval they predict. In such a case no one knows if the average of the models is even remotely more accurate than any individual model.

          I agree that the ensemble could all be too warm or too cold, especially too warm. I also agree that the “global” average, including SSTs is meaningless.

        • But if the models have a systematic error, then the average is meaningless.

          If 100 models produce 2+2 equal to values all betwixt 70 and 80, the average will be about 75.
          Therefore we see that 2+2 is very near 75.

          If the models were anywhere near reliable, they should be able to replicate the climate beginning in the year 1800 to present (or actually, ANY time period or ANY time duration).

          Let me guess: they haven’t, they can’t, and they will not even try, because they know it will demonstrate that their models are simply a joke; totally wrong.

          Why nobody is challenging the modelers and AGW proponents to do this – to “model” the last 200 years of climate – as a “test” of the reliability of their models, totally escapes me.

          Instead all we see is discussions of feedback, photons, radiation, error propagation, etc. etc.

          It’s as if everyone cannot see the forest for the trees.

    • Yeah but the IPCC have done a bit of a sleight of hand there.

      The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.

      They implicitly assume that the final state of the GCM corresponds to a possible climate state, and don’t allow for the possibility that the GCMs are not actually calculating valid trajectories for climate change at all, but are instead projecting a trajectory based on a fit to the historic data, as implemented in simplified physics and tuned to expectations.

      • You simply have no idea if the probability distribution projected by the models is an actual reality or not.

        As Clyde pointed out, if all the models are too warm or too cold then the probability distribution will be the same, too hot or too cold.

    • The problem with all models is that none of them can simulate the climate. The climate is not a continuous function. It is a collection of intermittent, discontinuous processes that, as the IPCC notes in the quote from Syscomputing, are chaotic and can’t be predicted.
      Even “The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. ” is not true because there is only one climate and it is not predictable.
      The climate functions on processes from the sub-millimeter scale to millions of kilometers (the sun’s magnetosphere clearly affects the earth’s climate in unknown ways). The processes are all chaotic in behavior, so using averages, means, or any other mathematical trickery doesn’t make the climate and models congruent. The difference between night and day temperatures can be as little as, say, 5°C in one place and 30°C-40°C in another, over actual temperatures anywhere from -50°C to 60°C. There is no way the two would have a similar effect on the climate.

  7. Much of this post is a physical explanation of why Pat Frank’s work on the lack of reliability of climate models is valid. Ed Zuiderwijk agrees with Pat on the utility of those model projections.

  8. Why don’t we use the models to identify the deficiencies in data collection and stop trying to pretend that anything we derive from the short, sparse dataset we have will have any confidence? Let’s start building the multi generational dataset for the future.

  9. Cute little story.

    But at the end of the day, you’re still left with a “political model” that says we’re all gonna die (someday), and, even after tweaking the climate record (also known as lying, cheating, or cooking the books), none of the political dudes gets too excited about the fact the model doesn’t even come close to predicting reality…

    …and nobody can figure out what further data tweaks are required to make reality conform to the models.

    Bummer.

  10. Might as well laugh them out of existence. Are there any mainstream TV or other comedians that could do this in a little less complicated but colorful way? An A/C repairman showed me a video of the Austin student mob: laughing at them, not helping their reputation.

  11. “What you can’t use [the models] for is making predictions.”

    Not a problem, as we all know, the models don’t make predictions, they make projections.

  12. The only models worth following are tall and skinny and they walk on runways.

    There are climate computer games, but no real models that make accurate predictions … and they are still not accurate after all the surface temperature data “adjustments” and infilling.

    One problem with models is the biased bureaucrats who own them — models are the personal opinions of the people who own and program them.

    They predict what those people want predicted, and what they are paid to predict.

    Their output is not data — just a personal opinion converted into complex math to look scientific and impress the general public.

    For climate bureaucrats poor climate predictions have never been a problem — failing to predict fast global warming in the future, however, could get them fired.

    (1) Global warming in high latitudes, in the colder months of the year, and at night, would be good news.

    (2) Warming in the tropics, in the summer, during the day, might be bad news.

    Guess what the warming since 1975 was most like.

    Yes it was good news — see (1)

    And mild warming has been happening since the 1690s — good news for the past 325 years!

    Climate alarmists would have us believe that continued global warming MUST be bad news even though past warming was good news.

    Very hard to believe that alarmist claim.

    We’ve heard predictions of a coming climate crisis since the 1970s — it’s always coming … but never arrives.

    Common sense is something you’ll never get from a climate model or a climate alarmist.

  13. “The odds of about 3 to 2 that none of the models is correct ought to be interesting politically speaking. Would you gamble a lot of your hard-earned cash on a horse with those odds? Is it wise to bet your country’s energy provision and therefore its whole economy on such odds?”

    Observed global temperature rise pushing 0.2C/decade and you want to bet the whole economy on ‘do nothing’?

    • That’s what they call a straw man. A textbook example. (And therefore not worth a serious answer)

      • Nonsense. The unspoken and unproven assumption in your post is that not only are the models all wrong but they are all consistently wrong by being predicting too much warming. Given that CO2 is a greenhouse gas then the amount of warming produced by doubling the atmospheric concentration is going to be greater than zero. Thus if you know nothing about the models or the underlying physics it is more likely that models under estimate the amount of warming since there are a lot of numbers greater than 2 than there are between 0 and 2. Simply stating that the models are unreliable doesn’t improve matters but in fact makes things worse.

        • The unspoken and unproven assumption in your post is that not only are the models
          all wrong but they are all consistently wrong by being predicting too much warming.

          That “unspoken and unproven assumption” is also the de facto assumption of a consensus of scientists at the IPCC.

          Thus if you know nothing about the models or the underlying physics it is more likely that models under estimate the amount of warming since there are a lot of numbers greater than 2 than there are between 0 and 2.

          Speaking of textbook logical fallacies. That’s petitio principii if I ever saw it.

          You presuppose your conclusion (“more likely that models under estimate the amount of warming”) in your premise (“since there are a lot of numbers greater than 2”). That’s a non-sequitur, first of all.

          Secondly, if your first premise (“you know nothing about … the underlying physics”) is true then nothing can be said after that, since the underlying physics of the workings of the climate is exactly what must be known before predictions are possible. And this first premise is exactly the first premise of the scientific consensus at the IPCC:

          “In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

        • Walton
          You said, “Thus if you know nothing about the models or the underlying physics it is more likely that models under estimate the amount of warming …” That really is an assumption. All the models have historically been running warm. That is, empirical evidence trumps your speculation.

        • Let us ignore the models and apply logic on its own. There are an infinity of positive numbers. There are an infinity of negative numbers. The centre point is zero, so therefore the true warming is more likely to be zero than anything else.

          • Let’s try that argument … there are infinite distances from the equator to the north pole, there are infinite distances from the equator to the south pole. Therefore you are most likely on the equator …… layman fail 🙂

        • “there are a lot of numbers greater than 2 than there are between 0 and 2.” Have trouble in math class, did we?

    • What would you propose as necessary to do?
      Btw, compare your mentioned “rise” with what the models are telling us, and then reconsider that bet 😀

  14. “There is only one ‘true’ ECS value that corresponds to the doubling of CO2 concentration in the real world.”

    Who says the ECS has to be a constant value in the real world? I suspect that ECS is a function that depends on CO2 concentration (and other atmospheric components), temperature, and probably many other things.

    • WR2- The assumption that there is such an entity known as “ECS” is just another climate con- a pipe dream, a fantasy, a joke. The idea of the ECS– there being one magical, determinable number with regard to CO2 levels determining world temperatures is just an attempt to add legitimacy to the con, the fantasy, the joke, that CO2 levels CAUSE the “warming”, or for that matter, the “cooling” of global temperatures. (Yes, some researchers have come up with negative ECS values!) First, there is the very questionable concept of “average global temperatures” and then one has to believe that CO2 levels control that average temperature– which is proven false by historical records. There are millions of climate inputs, and then add in the huge range of preceding climate conditions as to when those inputs are actually being a forcing, and the idea of finding this mythological creature known as the “Equilibrium Climate Sensitivity to a Doubling of Atmospheric CO2” becomes just a vain hunt for an Unholy Grail. No wonder ECS values are all over the place! Depending upon when one looks and with what methodology one looks and with what assumptions one looks with–you come up with a “number” but you might as well claim you sighted Big Foot.

  15. On March 18th California governor Newsom wrote a letter to the president saying “We project that roughly 56% of our [California’s] population – 25.5 million people – will be infected with the virus [coronavirus] over an 8 week period.”

    The models used to make this projection were wildly wrong – off by 99.8%, or 25.4 million! Even today, 15 weeks later, the number of infections in California is only 232,000. Global infections are only 10.5 million! This week the number of new infections in California was 7,000 in one day, which is the highest daily total of new cases since the pandemic started. Yet that number is 430,000 fewer than the daily average would have had to have been to infect 25.5 million people in just 8 weeks.

    Yet politicians relied on these extremely flawed models to make policy decisions that impacted every person in California.

    The difference between the coronavirus models and climate models is that the coronavirus models are shorter term so they are quickly proven to be wrong. Most of us won’t be alive to see how badly climate models fail – or not – in 2100. Unfortunately climate policies will be enacted long before the climate models are ever validated.

  16. The basic premise of modelling, that there must always be a natural function for everything in nature, is incorrect. Although, for example, there are simple natural functions that describe electrical current, there aren’t for the climate. Articles explaining climate change alarmism to laymen basically say that there is a simple linear function, similar to Ohm’s Law, that can be used to derive the average surface temperature of the Earth from the concentration of carbon dioxide in the Earth’s atmosphere. This has now been shown to be false. There was a pause in the increase in surface temperatures during a period when carbon dioxide emissions were increasing. You would have thought that even innumerate people would understand that the climate change theory had then been falsified, but no. All of a sudden climate change alarmists were very big on convincing people that the climate is complex and the functions used to describe it are multivariate. And now, having confused and intimidated people with that excuse, they have returned yet again to the reliable story that predicting the climate is simple and the imminent worsening of it is the fault of oil companies. You can’t win with these people.

    • “You would have thought that even innumerate people would understand that the climate change theory had then been falsified, but no. All of a sudden climate change alarmists were very big on convincing people”

      I think that the innumerate people, that is to say the general population, know very well that the theory has been falsified, which is why it ranks very low in every poll of what issues concern them. On the other hand, the climate change alarmists have known all along that it was false; their agenda is to sell snake oil.

    • This is a sad story about why models often can’t be predictive.
      A friend of mine has a degree in aeronautics, flew fighter jets and commercial airlines. He had a friend who wanted to build the fastest possible homebuilt plane – some sort of competition. Greg worked on the aerodynamics of the plane with the minimum size fuselage, and a canard (elevator up front, the wing behind the pilot).
      Greg was able to model the behavior of the wing and canard. He and the pilot went over the results. One airfoil on the canard was estimated to go about 3 mph faster, but when it stalled (a condition that can’t be modeled, because the airflow changes from smooth to turbulent) the stall was very abrupt and the plane would likely lose a fair amount of height. The friend was adamant about “the fastest” and they went with the touchier airfoil.

      The plane was built and the pilot started testing it. It went very fast, it seemed quite airworthy. Everybody involved was hopeful.

      On a test flight before the actual speed test the weather changed and the winds got very gusty and erratic. As he was coming in for landing the plane was hit by a 25-knot gust from the rear quarter. The loss of airspeed over the canard triggered a sudden stall and the plane, at maybe 25 ft altitude, snapped straight down at over 100 knots. The pilot was killed and the plane completely destroyed.

      This sad incident demonstrates why climate models can’t work. Air, water, wind, and waves are all subject to abrupt changes in behavior due to turbulent flow. Mathematical models can predict that it might happen, but they can’t predict when, or what happens next. It’s the difference between a straight-line wind of 100 mph and a tornado. Smooth functions versus chaotic behavior. The climate is anything but smooth functions.

  17. There is an unstated assumption in this story – that the outcome CAN be modeled to be predictive. If a system behaves chaotically, then the best one can do is present some probabilities per units of time – one cannot predict a given future far enough out with any reasonable degree of error.

    I believe there is a degree of chaotic behavior in climate that makes ANY prediction far enough into the future a completely useless exercise. And no, we do not know how far out in time that chaotic event will occur or what it will produce. We don’t even understand what triggers an event – likely there are multiple triggers.

    So that aside, if you have a system that relies on 100 variables but you only take the most important 10 into account, your model – by definition – is wrong. Otherwise those other 90 variables are NOT relied upon and need to be discarded anyway. This is due to precision and the iteration of the output to be used as input. We do not even know how many variables the climate system relies on, nor how each impacts the result, nor how each interacts with the others. Writing a model to try and understand all of this might be helpful to the few trying to work out the details, but it is absolutely useless for prediction.

    Writing 20 or 40 or 100 models just shows how little we understand the system. Averaging the results is about the MOST unscientific method I have ever heard of. It’s like taking 99 bad sets of data and averaging them with one set of microcode data to try and get better data. Who the H*LL made that one up?

    Because the output’s veracity is unknown, we don’t even know how or where to check for errors in the programming code. All we know is that it can be adjusted to produce output similar to a historical set of (tampered with) data. Errors in coding are common and can be difficult to locate even when one knows they are there… imagine finding bugs one does not know are there.

    They don’t even have to be coding bugs – just misunderstood behaviors of computers and their limits. A simple round-off error repeated in a loop a few million times can produce a staggeringly different value than one adjusted to be more accurate. Floating point numbers? Run them on one computer architecture and get one result, then on another computer architecture and get an entirely different result! Again, it’s the precision and how it is handled in the code.
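A minimal Python sketch of the round-off point above (an illustrative toy, not any actual climate code): repeatedly adding a value that is not exactly representable in binary drifts away from the exact result, and the size of the drift depends on the working precision.

```python
import numpy as np

n = 1_000_000
exact = n / 10                           # the intended result: 100,000

total32 = np.float32(0.0)
total64 = 0.0
for _ in range(n):
    total32 += np.float32(0.1)           # single-precision accumulation
    total64 += 0.1                       # double-precision accumulation

print("float32 sum:", float(total32))    # noticeably off (roughly 1% here)
print("float64 sum:", total64)           # off only in the last few digits
print("exact      :", exact)
```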

    • Robert,
      Chaotic systems are deterministic. Which means that they can be modelled to an arbitrary degree of accuracy as far out into the future as you like. Only random systems cannot be accurately predicted. Furthermore, most chaotic systems are bounded, so unlike a random system there is always a maximum and minimum value beyond which they will not stray. They also tend to be ergodic, so that if you start from two different points and take the average along the different trajectories you will get the same answer.

      What this means is that it is possible to model the climate more accurately than the weather. Which is something that is fairly obvious when it comes to making statements like “in 100 years time summer will be hotter than winter” which everyone will agree with. In contrast if I state that it will rain at mid-day at a particular location 100 years from now nobody would believe it.

      • Chaotic systems are deterministic. Which means that they can be modelled to an arbitrary degree of accuracy as far out into the future as you like.

        An “arbitrary degree of accuracy”? What does that even mean, if anything?

        Regardless, only according to laymen Izaak Walton, et al. Not according to the consensus opinion of climate scientists at the IPCC (emphasis added):

        “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

        https://www.ipcc.ch/site/assets/uploads/2018/03/TAR-14.pdf

        Section 14.2.2.2, p. 774

      • “Chaotic systems are deterministic….”

        This is true with two very important caveats if you want to predict the chaotic system’s behavior accurately:

        1) You must know the Initial conditions of the system to the last decimal point
        2) Your model must be a perfect representation of the chaotic system

        If either of these things are not true, your model will quickly diverge from the “real” system.

        If by “ergodic” you mean “systems that, given enough time, will eventually return to a previously experienced state”, then chaotic systems are not “ergodic”, as they are non-periodic and never exactly repeat a sequence of values, which means taking averages along different trajectories will not necessarily give you the same answer.
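A minimal Python sketch of the first caveat listed above, using the logistic map as an illustrative stand-in for a chaotic system (an assumption for the example only, nothing like a climate model): the rule is perfectly deterministic, yet an error in the tenth decimal place of the initial condition grows until the two trajectories have nothing to do with each other.

```python
r = 3.9                        # logistic-map parameter in the chaotic regime
x_a, x_b = 0.5, 0.5 + 1e-10    # two almost identical initial conditions

for step in range(1, 61):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: x_a = {x_a:.6f}  x_b = {x_b:.6f}  diff = {abs(x_a - x_b):.2e}")
```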

      • Walton
        You said, “Furthermore, most chaotic systems are bounded, so unlike a random system there is always a maximum and minimum value beyond which they will not stray.” Assuming for the sake of argument that you are correct, and the alarmists who warn about a “Tipping Point” are wrong, there is geologic evidence that those bounds are much larger than the concerns expressed by climate alarmists.

      • Everything in classical physics is deterministic in that every event must have a cause. Only quantum events are non deterministic, which prompted Einstein to say “God doesn’t play dice with the universe.” In classical physics, you can in theory reconstruct a message that was on a piece of paper and then burnt if you know the position and velocity of every single atom at the exact same moment in time and calculate backwards. But nobody would suggest that that could ever be done in practice.

      • A chaotic system is HIGHLY dependent on initial conditions in order to be deterministic *and* accurate. That means that trying to forecast or predict future temperatures to the hundredths digit when the inputs are only known to the tenths digit (or, even worse, to the units digit) makes the chaotic system model highly non-deterministic in practice.

        In addition, random events *do* happen in reality. If the current Sahara dust cloud impacts the “average” temperature of the globe at all but the GCMs don’t know about it, then their output for 2020 will be wrong and that will be reflected in every annual iteration done for the next “x” number of years. In other words the initial conditions for each iteration will be wrong, and even if the GCM is chaotic and deterministic it will still be deterministically wrong.

        • sycomputing, RicDre, Tim Gorman well all said!

          In an overall chaotic system that is poorly initialized and based on limited measurements (and is in turn dependent on many other chaotic subsystems as well as many random events), computer modelling by statistically averaging many variables, parameterizing and adjusting others, while excluding yet many others, is not a logical approach.

        • Tim
          You raise an important point that I think deserves amplification. Assume that you have correct initial conditions for a series of iterative calculations. Ideally, you might be able to make a correct forecast. However, ANY future perturbation (a butterfly flapping) will make the nth calculation different from reality. If that point in time becomes a new set of virtual initial conditions, then all subsequent calculations will diverge from reality, leading to a forecast that is wrong.

  18. The current atmospheric CO2 levels already reflect 94-96% of the long wave radiation back. How much can an additional 4-6% blockage do? The problem comes in “modeling” what the sun will do.

    • Bill Hirt

      Where did you get the value of 94-96% from, for the proportion of the long wave radiation already being reflected?

      A model that I developed gives a proportion of about 40%. That is enough to raise the average temperature of the Earth from about -18 degrees Celsius (the Earth with no atmosphere), to about 15 degrees Celsius (the present situation).

      Warning – the temperature does not increase linearly with the proportion of long wave radiation reflected. Blocking the last few percent would cause a huge increase in the temperature of the Earth. The Earth would vaporise if the proportion got to 100%.
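A minimal Python sketch that reproduces the numbers quoted above, assuming the relation they imply, T = 255 K / (1 - f)^0.25 with f the fraction of outgoing long-wave radiation returned to the surface. This formula is an assumption for illustration, not the commenter's actual model: it gives about -18 C at f = 0, about +15 C for f just below 0.4, and runs away as f approaches 1.

```python
# Illustrative only: an assumed one-parameter relation matching the figures above.
def surface_temp_c(f):
    """Surface temperature (C) for a fraction f of long-wave radiation returned."""
    t_airless = 255.0                          # K, the no-atmosphere Earth quoted above
    return t_airless / (1.0 - f) ** 0.25 - 273.15

for f in (0.0, 0.2, 0.4, 0.6, 0.8, 0.95, 0.99):
    print(f"f = {f:4.2f}  ->  T = {surface_temp_c(f):7.1f} C")
```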

  19. ” A curious aspect of such ‘homogenisation’ is that whenever it is applied the ‘adjusted’ past temperatures are always lower, thus making the purportedly ‘man-made warming’ larger.”

    WRONG.

    there are TWO datasets used to create a Global average

    TWO
    TWO

    GET THAT THROUGH YOUR THICK HEADS

    1. SAT
    2. SST

    SAT is surface air temperature
    SST is Sea surface temperature

    SAT is 30% of the average
    SST is 70% of the average

    A curious aspect of such ‘homogenisation’ is that when it is applied the ‘adjusted’ SAT past temperatures are always lower, thus making the purportedly ‘man-made warming’ larger.

    A curious aspect of such ‘homogenisation’ is that when it is applied the ‘adjusted’ SST past temperatures are always HIGHER, thus making the purportedly ‘man-made warming’ SMALLER.

    A curious aspect of such ‘homogenisation’ is that IN COMBINATION the ‘adjusted’ past temperatures are always HIGHER, thus making the purportedly ‘man-made warming’ SMALLER. This is because the SST is 70% of the total and its adjustments are MUCH LARGER than the SAT adjustments.

    It is false that the adjustments increase the GLOBAL WARMING.

    GLOBAL
    GLOBAL
    GLOBAL

    as in the whole planet

    it is true that the adjustments

    LOWER the warming in SST
    Raise the warming in SAT
    but considered together in the GLOBAL RECORD
    the adjustments

    LOWER
    THE
    WARMING

    • Can you identify a specific location where the adjustments have decreased the warming trend at that location? Perhaps you could provide the information for the place you live. That would be sufficient to negate the assertion you object to.

    • Steven,
      Using Ed’s method of colour as a comparative parameter, let us select another. Use salinity as a global property that is a bit familiar.
      We have TWO data sets.
      TWO.
      Sea salinity and Land salinity. One is trending lower over time and one is trending higher.
      Can we take a global average?
      Maybe we can get a mathematical average, but it might not help us much with understanding physical processes.
      We can postulate that the oceans are becoming more saline because salty land is eroding into them. There is some sort of coupling factor, varying over time, that we need to measure and understand before we can take a global average that is valid.
      The scientist might say that it is invalid to try to take a global average because the nature of the numbers (intensive versus extensive properties) prohibits it.
      The geologist might say that there are many processes affecting ocean salinity, like fresh rock exposures to leaching by the sea along mid ocean ridges, plus from submerged fumaroles. These happen at unpredictable times.
      The climate researchers might say that salinity has an anthropogenic component because salt is used in big manufacturing processes that are, by definition, harmful to Life as we know It.
      The realist might say that we can take a global average salinity with a grain of salt.
      ……..
      Now, relate this to models for climate sensitivity.
      Sadly, we cannot, because (we presume) there is only a certain, fixed amount of global salt, whereas there is a daily addition and subtraction of global radiation intensity for CO2 to control via its knob.
      You can now grasp this analogy, contemplate its lessons and learn what it tells you about school kid levels of blogging about how simple global temperature concepts are. Geoff S

    • The ‘homogenisation’ of the temperature records kept by KNMI at de Bilt, the Dutch national weather service, made the 1930s a lot colder than my own good father remembers, and than the newspapers of the time reported.

    • Huh? If SST is 70% of the total and its warming is smaller, then how can the total be higher? That would mean the SAT warming would have to be at least double over time in order to offset the lower SST!

      Are you truly trying to imply that land temperatures are going up twice as fast as the ocean surface temperature?

      • A bigger issue is that no quality long-term SST record exists, so any quantification of “warming” is garbage.

      • Tim Gorman says:

        “Are you truly trying to imply that land temperatures are going up twice as fast as the ocean surface temperature?”

        That is what the temperature data sets appear to show:

        https://www.woodfortrees.org/plot/crutem4vgl/plot/hadsst3gl

        Which is how you get the internet “meme” whereby anywhere you google is warming twice as fast. Since places people google are generally on land (ie “places”) then of course they are warming “twice as fast”, because the planet is 70% oceans, which are apparently warming at a much slower rate.

        • Thinking,

          The problem is that the land temps are *NOT* going up twice as fast as the oceans. For much of the North American continent maximum land temps are actually going down.

          The graphs you link to are for *average* temperature anomaly for an “averaged” set of data. The use of averages totally loses the capability of telling you what is going on.

          If I tell you the average is 50 can you tell me what the maximum and minimum data values are in the data set that gives that result? If I tell you that the next iteration of the data set gives an average of 50.1 can you tell me the maximum and minimum data values in the new set of data?

          The climate is determined by the overall temperature profile, including maximum and minimum temps. It is *NOT* determined by average temperature. A sixth grader can understand this. Yet the climate alarmists want us to believe that an average going up can only happen if maximums go up, resulting in the Earth turning into a cinder. That assumption simply cannot be made from an “average” temp. It’s why I advocate that climate scientists begin using degree-days (both cooling and heating), which are an integral of the temperature curve, instead of average temperatures. A small worked example follows.
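          To illustrate the degree-day point with made-up numbers: two days with exactly the same mean temperature can have very different heating and cooling degree-day totals when the degree-days are computed as an integral of the hourly temperature curve rather than from the daily average.

          ```python
          # Illustrative only: two hypothetical days with the same mean temperature
          # but very different degree-day totals when computed as an integral of
          # the hourly curve.
          import numpy as np

          BASE_F = 65.0  # conventional base temperature in deg F

          def degree_days(hourly_temps_f):
              """Return (heating, cooling) degree-days from hourly temperatures."""
              t = np.asarray(hourly_temps_f, dtype=float)
              hdd = np.maximum(BASE_F - t, 0.0).sum() / 24.0   # degree-hours -> degree-days
              cdd = np.maximum(t - BASE_F, 0.0).sum() / 24.0
              return hdd, cdd

          day_a = np.full(24, 65.0)                        # flat 65 F all day
          day_b = np.array([50.0] * 12 + [80.0] * 12)      # cold night, hot afternoon

          for name, day in [("day A", day_a), ("day B", day_b)]:
              hdd, cdd = degree_days(day)
              print(f"{name}: mean = {day.mean():.1f} F, HDD = {hdd:.1f}, CDD = {cdd:.1f}")
          # Both days average 65 F, but day B accumulates 7.5 heating and 7.5 cooling
          # degree-days while day A accumulates none.
          ```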

  20. How to drive those with geological and/or chemistry backgrounds to distraction.

    “We want to make a compound that produces colour somehow (the mechanism how it does that is not really relevant). However, we specifically want a well-defined colour, prescribed by whatever application it is going to be used for. Say a shade of turquoise.”

    The chemical formula, very roughly, for turquoise is CuAl6(PO4)4(OH)8 · 4H2O.

    Taking another example, an extremely desired coloring material by artists over millennia is lapis lazuli.
    Lazurite, the central coloring component of lapis lazuli has a chemical formula of Na6Ca2(Al6Si6O24)(SO4,S,S2,S3,Cl,OH)2.

    When Lapis is crushed and mixed with hardening oils/resins the color is reasonably stable. Turquoise, not really.

    Crystalline aluminum oxide is technically corundum, better known as sapphire and ruby.
    The sapphire/ruby formula is Al2O3; the perceived colors come from contaminants.
    Padparadscha sapphire is apricot, salmon or pink sapphire; the colors are caused by trace amounts of Fe and Cr3+.

    There are other gemstones that are primarily Al, aluminum compounds.
    Topaz’s chemical formula is Al2(SiO4)(F,OH)2

    Rubellite tourmaline, i.e. pink to red colored tourmaline has a chemical formula A(D3)G6(T6O18)(BO3)3X3Z.
    Where:

    A = Ca, Na, K, Pb or is vacant (large cations);

    D = Al, Fe2+, Fe3+, Li, Mg2+, Mn2+, Ti (intermediate to small cations – in valence balancing combinations when the A site is vacant);

    G = Al, Cr3+, Fe3+, V3+ (small cations);

    T = Si, which can sometimes have minor Al and/or B3+ substitution;

    X = O and/or OH;

    Z = F, O and/or OH.

    Amethyst and smoky quartz share a simple chemical formula with clear quartz of SiO2.
    Radiation causes the amethyst and the smoky colors.

    Now, what was that you were saying about elephants wiggling their trunks?

    Using colors and their formulation to invent a simple construct is an oxymoron. Colors and their formulae are anything but simple.

  21. From the essay:
    “ There is only one ‘true’ ECS value that corresponds to the doubling of CO2 concentration in the real world.”

    Another climate assumption without evidence – that CO2 is the climate control knob. If the doubling of CO2 has any effect on temperature, that effect could depend on any of a multitude of other variable factors, perhaps even allowing for a possible range of outcomes much greater than 3. Perhaps, by the logic of the thought experiment in the essay, the probability is that ECS approaches 0.

    • There’s no such thing as ECS, in the real world it is swamped by emergent phenomena.

    • ECS, the Charney sensitivity, is defined as the long-term temperature effect of a doubling of CO2 concentration after the system has settled down in its new state, that is, after all feedbacks and other changes have worked themselves through. It is difficult to see why the process, if it were repeated ab initio, should not give the same result, unless the result depends on the history of how the final state was reached. If that were the case then all models are by definition off, because any modelling would be impossible.
      The notion has no bearing on whether CO2 drives the climate or not. That depends on the value. If ECS is only about 0.5, which is what I consider a realistic estimate, then its role in the energy balance of the planet is marginal compared with other factors. A small numerical sketch of what different ECS values imply follows.
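      For concreteness, a small numerical sketch of what different assumed ECS values imply (my own illustration using the conventional logarithmic dependence of warming on concentration; the 280 ppm baseline and 420 ppm present-day figure are round numbers, not from the essay):

      ```python
      # Illustration only: equilibrium warming implied by assumed ECS values,
      # with warming proportional to the number of CO2 doublings.
      import math

      def equilibrium_warming(c_ppm, c0_ppm=280.0, ecs_per_doubling=3.0):
          """Equilibrium temperature change (deg C) for CO2 going from c0 to c."""
          doublings = math.log2(c_ppm / c0_ppm)
          return ecs_per_doubling * doublings

      for ecs in (0.5, 1.5, 3.0):
          dt_2x = equilibrium_warming(560.0, ecs_per_doubling=ecs)    # one full doubling
          dt_now = equilibrium_warming(420.0, ecs_per_doubling=ecs)   # roughly today's level
          print(f"ECS = {ecs:.1f} C/doubling -> {dt_2x:.2f} C at 2xCO2, "
                f"{dt_now:.2f} C above the 280 ppm baseline at 420 ppm")
      ```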

  22. Look at the chaos, globally, that computer models have foisted on everyone with the COVID-19 sc@mdemic!

  23. There are some qualifications to be made. What, for instance, does it mean: the model is ‘incorrect’? It means that it could be incomplete — there are concepts or principles missing in it that should be there — or, conversely, over-complete — with things that are there but should not be — or that there are aspects of it which are just wrong or wrongly coded, or all of those.

    Going back to first principles of science, or ‘natural philosophy’, it becomes clear that all science is models, and all are incomplete: all are simplified representations of reality that are intrinsically and inevitably* less than the reality itself.

    Incompleteness therefore cannot be the problem, and problem there most certainly is. The problem in the end is simply one of utility. Some models, like Newtonian and Einsteinian gravity, give useful answers. Climate change models do not, in the sense of predicting the future climate. Of course they are most useful in achieving political control of energy, which is why they are promoted.

    I think it is important to avoid trashing climate models for the wrong reasons. That leads to an anti-science mentality that can use the same justification to trash useful theories that do work, on the grounds that they cannot be complete. Indeed the whole thrust of the Left’s anti-science is based on the post-Hegelian and post-Marxist faux proposition that truth is relative to culture, that culture can be changed, and that therefore new truths can emerge. The facts of the world, according to them, are subject to the consciousness that perceives them. This ‘magic’ thinking is behind all the BLM and cultural diversity and gender politics and the attack on religion: these are all attempts to change the reality of the world by changing how people think about it.

    The fallacy is that, whilst they may come to dominate people’s perceptions of the world, they do not change what the world being perceived actually is.

    You may utterly believe that gender is a matter of choice. But it won’t – as we coarsely say here – put tits on a bull.

    I think this is a key and very important point that distinguishes those who are broadly Left-thinking from those who are broadly Right. The Right is, in the end, more humble and does not believe the world can be fundamentally changed by changing people’s beliefs, whereas the Left in its arrogance believes that all it takes is blind faith in emotionally satisfying principles.

    Politicians and those in the business of making profits of course realise that, for their purposes, what is in fact the case is of little interest, since their concern is solely the manipulation of people’s perceptions in order to get them to do, or to let the manipulators do, what the manipulators want done.

    The utility of climate change models (and indeed of all politically correct ‘woke’ models of today), as with any religious system, lies not in their ability to accurately predict the future, but in the construction of a public world-view that controls a moral framework, which in turn dictates the actions of the masses and sanctifies the actions of a few.

    They are all very, very bad science, but they are very, very clever and very, very good examples of ‘headology’: getting people to believe in stuff that affects their behaviour, to the extent that you can enslave their minds.

    And for those of a Christian persuasion: whilst I would say that at its core Christianity is no different to any other ‘headology’, it is a far, far wiser and more benign belief system than the ‘woke’ politics of the Left.

    *the reasoning behind that is off topic and long.

    • Too little attention is paid to both the Old and New Testaments, not necessarily from a spiritual perspective but from a societal basis. We appreciate the Founders of the U.S. for the humanitarian aspects of the Declaration of Independence and the Constitution. We should also revere the authors of the Bible for the foundational principles of how to live together in a just and respectful society.

  24. Didn’t someone say that with six parameters you can make an elephant fly? Where is John Galt?

  25. The problem I have with the precautionary principle (which I think someone implies above) is that it assumes there is nothing to be lost. But what if we have it wrong? Doesn’t that mean we don’t know what’s really going on: that we have failed to understand long-range weather, putting our forecasting of dangerous droughts, floods, storms etc. back decades, and that our climate forecasts are way out, all meaning we are potentially in an even more dangerous situation, one in which unnecessary actions have weakened us too much to cope, we didn’t prevent an extinction, and other actions would have been more effective for the environment? As a hypothetical situation: what if it is really about many indirect solar factors influencing the stratosphere, the jet streams and the oceans, thereby indirectly influencing weather and hence eventually climate and temperature, and it has nothing to do with CO2? Then by looking at CO2 and temperatures we completely miss what’s going on: CO2 has little impact, while indirect solar changes alter weather patterns with disastrous impacts and, eventually and indirectly, change temperatures.

  26. I too don’t understand the confidence in the models. Not only did they fail to forecast the pause (well, I can accept they may have failed to forecast it if something changed that they could not be expected to foresee), but they then took many years to explain it, even though the relevant measurements from all the sensors around the world, in space etc. were presumably coming in every day and could be fed straight into the models. It doesn’t make sense to me how we can be so confident in the models. I must be missing something.

    Regarding models failing because it is a very difficult task: I confess I don’t know nearly enough, but what about the option of models failing because the wrong approach is used?

    For example, doesn’t focusing on temperatures and averages miss far too much of the thermodynamics? I presume we put sensitivities in (e.g. to CO2), though I do not understand why we do that, or why we think in terms of drivers or sensitivities at all; isn’t the system too complex to simplify that way? It seems to me that kind of approach leaves you dangerously open to assumptions.

    Aren’t temperatures heavily influenced by weather? Isn’t weather heavily influenced by jet streams and stratospheric factors? Or have I got that wrong? How well do we understand what influences the stratosphere and the jet streams, and how well do we incorporate that into the models?

  27. “The tweaked model likely is a new incorrect model that coincidentally produced a match with the data.”

    No, not really a coincidence; the model was repeatedly tweaked to produce the desired result.
    A “coincidence” suggests a random concurrence of events or outcomes.
    The tweaking of the model was purposeful and intentional, done to produce a desired outcome.
    That is not coincidence.

    Perhaps the thought experiment modelers should have averaged all the results!!!

    • Current climate models have a large number of ‘adjustables’, in many cases of order 10 or even higher. The point von Neumann made is that you can likely fit the data with any of those models. Just take a randomly selected model and go through the motions. The coincidence is in the fact that there is a model that can be made to fit. A toy illustration follows.
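      A toy illustration of the point, using an ordinary polynomial fit rather than a climate model (entirely my own example): a structurally wrong model with ten adjustable coefficients can be tuned to reproduce the calibration data almost perfectly, yet its prediction outside the fitted range is typically far from reality.

      ```python
      # Toy example (not a climate model): tune a 9th-degree polynomial (10
      # adjustable coefficients) to noisy data generated by a simple exponential,
      # then ask it to predict beyond the calibration range.
      import numpy as np
      from numpy.polynomial import Polynomial

      rng = np.random.default_rng(0)

      def truth(x):
          """The 'real' process being imitated."""
          return 2.0 * np.exp(0.3 * x)

      x_cal = np.linspace(0.0, 5.0, 12)                       # calibration points
      y_cal = truth(x_cal) + rng.normal(0.0, 0.05, x_cal.size)

      model = Polynomial.fit(x_cal, y_cal, deg=9)             # over-flexible "tweaked" model

      rmse = np.sqrt(np.mean((model(x_cal) - y_cal) ** 2))
      x_new = 7.0                                             # outside the calibration range
      print(f"fit error on calibration data: {rmse:.4f}  (looks excellent)")
      print(f"model prediction at x = {x_new}: {model(x_new):.1f}")
      print(f"reality at x = {x_new}:          {truth(x_new):.1f}")
      # The fit to the calibration data is essentially perfect, but the
      # extrapolated value usually bears little relation to reality.
      ```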

  28. But if you do not believe in the climate models and the deleterious effects that a recent slight increase in atmospheric CO2 would bring, then you risk international and national CO2 mitigation targets never being met — all that fabulous technology of wind and solar gone to waste. All those carbon credit schemes wound up! The modeled projections MUST be seen as worthy and to some degree accurate (even if they aren’t).
    For without a robust belief in the models you risk the loss of levies on CO2, the money drying up and the UN being left with very little — not enough to support its continual social & bureaucratic expansion.

    So think on, and realise how devastating that would be for the world!

    [I need a sarc tag?]

  29. “The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models.”
    Freeman Dyson (RIP)

  30. Huh, this writing is hilariously bad. Even the addendum, the “Analysis of probabilities” is complete garbage.

    “We know that 4 of the models are not correct, so we know at once that the probability of any model being incorrect is at least 0.8.”
    This is an elementary error. Even if we disregard the fact that a model outcome is not a single value but many, with ranges, error bands, time dependency etc., so that even differing predictions don’t necessarily mean any of the models is wrong (signifying a complete lack of understanding of modelling by the author), if we have a one-out-of-n situation the probabilities aren’t simply 1/n (or (n-1)/n), apart from extremely simple cases. This is such an elementary error that it makes it obvious that the author has no idea what he’s talking about.

    • nyolci:

      . . . if we have a one out of n situation, the probabilities aren’t simply 1/n (or (n-1)/n), apart from the extremely simple cases.

      So you would argue that a GCM is not an “extremely simple case,” is that correct? In my thinking, I’m treating the result of a GCM as a single output value (predicted temperature) based upon variable inputs. Thus, in the case of GCM ensembles, don’t we have a “one out of n” situation where one or all of the ensemble predictions are either true or false? And isn’t that an extremely simple case? How am I wrong in this regard?

      Thanks!

      • > And isn’t that an extremely simple case? How am I wrong in this regard?
        Completely wrong. An extremely simple case is a coin toss. But here’s another one: I’m either levitating right now or not. That is a one-out-of-two situation, and it’s obvious that the probabilities are very different from 50-50. Assigning probabilities in this manner is an elementary error. And please note that this is just a marginal side issue with the article; the whole thing is problematic.

    • I realise that my conclusions hurt. But given that you so adamantly know that I don’t know what I am talking about, I may assume that you yourself do know. So, please, enlighten us and tell us how we should choose the best model from the five and what the likelihood is that we made the right choice.

      • Ed,

        If all five models are wrong then how do you choose the best one?

        Besides which, if all five models are wrong then the probability is that their average will be wrong as well. They can either be all high, all low, or they might span the true value. So there are two chances out of three, 67%, that you will get the wrong answer from them or their average.

        With no way to validate the models there is no way to know which possibility applies. Since, in fact, they all run too hot compared to the satellite data and balloon data, it’s a pretty good guess that the first possibility is the most probable.

        > But given that you so adamantly know that I don’t know what I am talking about, I may assume that you yourself do know. So, please, enlighten us and tell us how we should choose the best model from the five and what the likelihood is that we made the right choice.

        It’s extremely ridiculous that you got even this one wrong. ’Cos it is obvious that I can point out your amateurism without having to tell you which one is the best (or even knowing that). Your sentence “With 5 different outcomes we only know for certain that at least 4 of the models are wrong but not which ones” gave away the fact that you’re clueless about modelling. Okay, I understand that your answer is part of some kind of debating tactic, like “turn the tables and make him the one who has to explain”, but of course I’m not the one with the obligation.

        Anyway, just to enlighten you, I give you some thoughts about modelling. I can give you these thoughts without knowing the specifics of the models. So: models have initial conditions and external variables. All the models above may have these set differently, even to the point that they represent different scenarios. I don’t know and I don’t care, for I only know that the modellers are scientists who may get their stuff wrong, but these models are the result of a very long vetting and reviewing process, so we may safely assume there are no serious errors in them.

        Also, modellers do numerous runs with slight changes to the external variables. The actual results come from the statistical analysis of these runs. It means that a single model doesn’t have an answer like “4” but a lot of variables with time dependence and error bands that may be comparable in magnitude to the result. In this sense even a single model can give a result like 1.8-5 to a single question. This is entirely expected; climate is a chaotic and extremely nonlinear system with tipping points etc. Models are very good at predicting the tendencies and the magnitudes.

        All in all it’s very likely that all the models are good even if they predict different things. I know it’s hard for an outsider to swallow this, but it is true regardless. (Actually it is very likely that the error bands overlap, and this is what really counts; a toy illustration follows.) Equivalently we can say that very probably all models are wrong, and this is not a contradiction. Especially in the presence of poorly known tipping points, models may have wildly different short-term predictions. These tend to even out in the long term; furthermore, models catch tendencies and magnitudes very well.
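        To illustrate the “overlapping error bands” point with made-up numbers (a toy sketch, not real model output): two ensembles with different central estimates can both be statistically consistent with the same observation.

        ```python
        # Toy sketch with invented numbers: two "model ensembles" whose means
        # differ but whose 95% ranges overlap and both contain an observation.
        import numpy as np

        rng = np.random.default_rng(1)

        ensemble_a = rng.normal(loc=2.6, scale=0.5, size=100)   # hypothetical model A runs (C)
        ensemble_b = rng.normal(loc=3.4, scale=0.6, size=100)   # hypothetical model B runs (C)
        observation = 3.0                                        # hypothetical observed value (C)

        for name, runs in [("model A", ensemble_a), ("model B", ensemble_b)]:
            lo, hi = np.percentile(runs, [2.5, 97.5])
            inside = lo <= observation <= hi
            print(f"{name}: mean {runs.mean():.2f} C, 95% range {lo:.2f}-{hi:.2f} C, "
                  f"observation inside: {inside}")
        # Different central estimates, but the 95% ranges overlap and both contain
        # the observation, so neither ensemble is ruled out by the other's point
        # estimate.
        ```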

  31. “All models are wrong, but some models are useful.” Unfortunately, the utility of climate models is all too often assessed by how much funding they generate for their creators. Or by how well they “support” your biases.

  32. The analogy confused me more than anything.

    I think the most important things common to establishment climate models (AKA IPCC) are uncertainty and flexibility. Uncertainty gives them a huge get-out-of-jail clause: they can make as many models and projections as they want, but because it’s all uncertain, nothing is ever claimed for certain. That allows the flexibility to add lots of kludges to the models (which help predict catastrophe), colloquially known as parameterizations, especially for positive feedbacks. But that’s OK, because they’re all based on the “known physics” of “settled science”, as every modeller (who wants a grant) uses the same basic greenhouse gas models descending from the papers of Manabe and Wetherald (1967) and Held and Soden (2000). Anyone disputing whether Manabe and Wetherald / Held and Soden even make sense is apparently disputing “settled science”, and so is in “science denial” when they model radiative gas behaviour any other way. That’s why alternative models by David Evans, the saturated GHGE (Miskolczi), mean free path models, … are paid for by Big Oil, and are therefore evil.

    Summary: IPCC are angels. People who dispute them are evil. But good and evil are terms associated with sin and religion; so instead the evil people are called deniers, and the good people called “consensus”. Better – sounds far more scientific. The consensus have ‘uncertainty’ on their side allowing them vast scope to exaggerate via parameterizations. No matter what – any model not supporting catastrophic interpretations (directly or indirectly) is suspect; which is all models not cribbed from Manabe and Wetherald (1967), and Held and Soden (2000).

    • PS: Uncertainty also means it does not matter when they’re wrong, because they never said they were right. Despite hundreds of trillions of climate policy money riding on the projections.

      How wrong we all were thinking uncertainty was detrimental to the climate cause!
