New study attempts to “squeeze out” uncertainty in climate models

From the “we’re gonna need a bigger computer” department.

Climate model uncertainties ripe to be squeezed

The latest climate models and observations offer unprecedented opportunities to reduce the remaining uncertainties in future climate change, according to a new study.

Although the human impact of recent climate change is now clear, future climate change depends on how much additional greenhouse gas is emitted by humanity and also how sensitive the Earth System is to those emissions.

Reducing uncertainty in the sensitivity of the climate to carbon dioxide emissions is necessary to work out how much needs to be done to reduce the risk of dangerous climate change, and to meet international climate targets.

The study, which emerged from an intense workshop at the Aspen Global Change Institute in August 2017, explains how new evaluation tools will enable a more complete comparison of models to ground-based and satellite measurements.

Produced by a team of 29 international authors, the study is published in Nature Climate Change.

Lead author Veronika Eyring, of DLR in Germany, said:

“We decided to convene a workshop at the AGCI to discuss how we can make the most of these new opportunities to take climate model evaluation to the next level”.

The agenda laid out includes plans to make the increasing number of global climate models being developed worldwide more than the sum of their parts.

One promising approach involves using all the models together to find relationships between the climate variations being observed now and future climate change.

“When considered together, the latest models and observations can significantly reduce uncertainties in key aspects of future climate change”, said workshop co-organiser Professor Peter Cox of the University of Exeter in the UK.
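The "relationships between present-day variations and future change" approach described here is known as an emergent constraint. A minimal sketch of the mechanics follows; all numbers are invented for illustration (real studies use observables such as lower-tropospheric mixing or the seasonal CO2 cycle as the predictor):

```python
import numpy as np

# Hypothetical emergent-constraint sketch: each model supplies an
# observable present-day metric x and a projected quantity y
# (e.g. warming by 2100). Values below are invented.
x_models = np.array([1.0, 1.4, 1.8, 2.2, 2.6, 3.0])   # observable metric
y_models = np.array([2.1, 2.4, 3.0, 3.3, 3.9, 4.2])   # projected warming, degC

# Fit the across-model relationship y = a*x + b
a, b = np.polyfit(x_models, y_models, 1)

# An (assumed) real-world observation of the predictor, with its
# uncertainty, picks out a narrower range of y than the raw model spread.
x_obs, x_obs_err = 1.9, 0.2
y_constrained = a * x_obs + b
y_constrained_err = abs(a) * x_obs_err

print(f"raw model spread: {y_models.min():.1f} to {y_models.max():.1f} degC")
print(f"constrained estimate: {y_constrained:.2f} +/- {y_constrained_err:.2f} degC")
```

The across-model regression plus an observation of the predictor yields a narrower range than the raw model spread, which is the sense in which observations "constrain" the projection.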

The new paper is motivated by a need to rapidly increase the speed of progress in dealing with climate change. It is now clear that humanity needs to reduce emissions of carbon dioxide very rapidly to avoid crashing through the global warming limits of 1.5°C and 2°C set out in the Paris Agreement.

However, adapting to the climate changes that we will experience requires much more detailed information at the regional scale.

“The pieces are now in place for us to make progress on that challenging scientific problem”, explained Veronika Eyring.

From the University of Exeter via press release

The paper:

Taking climate model evaluation to the next level


Earth system models are complex and represent a large number of processes, resulting in a persistent spread across climate projections for a given future scenario. Owing to different model performances against observations and the lack of independence among models, there is now evidence that giving equal weight to each available model projection is suboptimal. This Perspective discusses newly developed tools that facilitate a more rapid and comprehensive evaluation of model simulations with observations, process-based emergent constraints that are a promising way to focus evaluation on the observations most relevant to climate projections, and advanced methods for model weighting. These approaches are needed to distil the most credible information on regional climate changes, impacts, and risks for stakeholders and policy-makers.

Fig. 1: Annual mean SST error from the CMIP5 multi-model ensemble.

Fig. 2: Schematic diagram of the workflow for CMIP Evaluation Tools running alongside the ESGF.


Fig. 3: Examples of newly developed physical and biogeochemical emergent constraints since the AR5.

Fig. 4: Model skill and independence weights for CMIP5 models evaluated over the contiguous United States/Canada domain.
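The skill-and-independence weighting of Fig. 4 combines two factors: models close to observations get more weight, and models close to other models (near-duplicates) get less. A minimal sketch in the spirit of Sanderson et al. (2017); the distances and radius parameters below are invented, not taken from the paper:

```python
import numpy as np

# d_obs[i]: distance of model i from observations (lower = more skill)
# d_mod[i, j]: distance between models i and j (lower = more alike)
d_obs = np.array([0.5, 0.6, 1.2, 0.55])
d_mod = np.array([
    [0.0, 0.1, 1.0, 0.9],
    [0.1, 0.0, 1.1, 1.0],
    [1.0, 1.1, 0.0, 1.2],
    [0.9, 1.0, 1.2, 0.0],
])

D_q, D_u = 0.7, 0.5   # skill and independence radii (free parameters)

# Gaussian skill weight: decays with distance from observations.
w_skill = np.exp(-(d_obs / D_q) ** 2)
# Independence weight: a model surrounded by near-duplicates is down-weighted.
w_indep = 1.0 / np.sum(np.exp(-(d_mod / D_u) ** 2), axis=1)

w = w_skill * w_indep
w /= w.sum()          # normalize to a probability-like weight
print(np.round(w, 3))
```

Here models 0 and 1 are skillful near-duplicates, model 2 is far from observations, and model 3 is skillful and relatively independent, so model 3 ends up with the largest weight.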


133 thoughts on “New study attempts to ‘squeeze out’ uncertainty in climate models”

    • Don’t say: “We have a problem, Houston.”

      Although the human impact of recent climate change is now clear,


      future climate change depends on how much additional greenhouse gas is emitted by humanity and also how sensitive the Earth System is to those emissions.

      Oh, so it’s not actually that clear what the human impact is. Funny, I thought you said it was clear.

      • The whole exercise is only superficially about ‘science’ but rather is 100% about ‘science communication’, i.e. using sciency language that sounds sciency to the average Joe or Joanna as a marketing exercise.

        These filth would market Thalidomide if their careers would benefit.

        • Thalidomide was originally marketed in Europe as a sleeping pill. But it is not useless as a medicine. From wikipedia:

          “Thalidomide is used as a first-line treatment in multiple myeloma in combination with dexamethasone or with melphalan and prednisone, to treat acute episodes of erythema nodosum leprosum, and for maintenance therapy. Thalidomide is used off-label in several ways.”

        • This is all about pretending to be honing their skills and computing ability, duping the public into funding them even more while not accomplishing anything real, except using a lot of energy and money.

          Remember, they will never be done because they need job security and thus will always need more funding.

        • After 30 years and tens of billions of dollars, I’m glad we’ve narrowed the human pinkie-to-footprint range down.

          Same as ECS range after 40 years: 1.5 to 4.5 degrees C in 1979 and 2019.

          • My bet for ECS is approx. 0 to 1C/doubling* – that is, NOT a problem.
            Regards, Allan

            Post script: 🙂
            * The asterisk is because CO2 trends LAG temperature trends at all measured time scales, and I am still having difficulty understanding how the future can cause the past.
            Hmm… maybe its Relativity – if the CO2 molecules vibrate really fast, like the speed of light or even faster, can they time-travel? Can I get a grant to study this? Anyone?

          • @ALLAN MACRAE January 7, 2019 at 5:27 pm

            My bet for ECS is approx. 0 to 1C/doubling

            I agree with your upper limit, though based on actual empirical evidence the upper limit is most likely <0.5°C/2xCO2, but I firmly believe ECS could still be negative once all feedbacks are allowed to feedback.

      • And, the first one of those actually means ‘the impact of climate change on humans is now clear’. They used the wrong preposition.

        • Just as SOON as we figure out the actual climate sensitivity of CO2, we’ll KNOW that climate models are still pointless toys, systematically taking minuscule and major errors alike, and multiplying them billions and trillions of times, then run hundreds of different times, producing evidence-free output (not climate data), all of which will be objectively wrong, but which nevertheless will be spun by CAGW alarmists into the absolute and objective scientific truth, just as Gavin Schmidt hoped when he said (something like) “hopefully, reality will be somewhere in that spread.”

          Jesu Christo, please don’t allow the future of Western Civilization be decided by such stupidity.

          • But if there are more models, used in the way you describe, that will decrease uncertainty. (Do I need a \sarc?)

    • Two or three years ago I found that the global temperature anomaly is strongly correlated (R² = 0.86) with the delayed strength of the Earth’s magnetic dipole.
      Ever since, I have tried to come up with a plausible mechanism to explain the above, but failed to define a scientifically credible hypothesis. Therefore, since I have no reason to think that NOAA’s geomagnetic data files are ‘fiddled’, and nature doesn’t tolerate coincidences, I am (for the time being) of the view that the global temperature anomaly is a ‘work of art’ rather than a product of nature.
      Devising models to match something that could be just an illusion, a mirage, a sort of fata morgana, is a total waste of time.
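For what it is worth, the mechanics of such a lagged-correlation check are easy to reproduce; the series below are synthetic random walks, not real geomagnetic or temperature records:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 300, 15
x = np.cumsum(rng.normal(size=n))           # stand-in "dipole" series
# Response tracks the driver with a delay of true_lag steps plus noise;
# the first true_lag points are just padded with the initial value.
y = np.full(n, x[0])
y[true_lag:] = x[:-true_lag] + rng.normal(scale=0.3, size=n - true_lag)

def lagged_r2(x, y, lag):
    """R^2 between y[t] and x[t - lag]."""
    r = np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]
    return r * r

r2_by_lag = [lagged_r2(x, y, k) for k in range(40)]
best = int(np.argmax(r2_by_lag))
print(best, round(r2_by_lag[best], 2))
```

A caveat worth keeping in mind: two completely independent random walks also frequently show a high R², so a lagged correlation alone is weak evidence of a physical link.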

      • It’s all relative. After living around the Houston area for several decades, then moving to Massachusetts and getting to live in 95/95 conditions for weeks at a time with no AC, I’ve come to the conclusion that Hou ain’t that bad. We’re rarely at 95% at midday, more like 60% and that’s sticky but not unbearable.

        I’ve also survived the “dry heat” of Arizona. FLASH: if it’s above your skin temperature, it’s hot. Quibbling about “feels like” is wasting time; it’s just HOT!

        • Massachusetts at 95°F and 95% Humidity, for weeks?


          Not even Atlanta, Mobile or Jacksonville have weeks of 95°F and 95% Humidity.

          It is quite easy to trundle on to the big box stores and buy an A/C; including ones that do not need to be hung in a window.

          • I have been in Djibouti, right on the Red Sea. It’s not a “dry” heat (I went in Oct, >90°F max every day, often >100°F, with average 65%RH). After the first trip there I promised that I owed myself a ski vacation… it has been 6 years and another trip there, and I still haven’t gotten that ski vacation!!! And now my flat feet are producing sore ankles so I probably can’t ski anyway! I’m wondering if it would be worthwhile to go anyway just to sit in the lodge and watch?

          • Red94ViperRT10

            I just re-started skiing last weekend, ten years after a horrific injury. I encourage you to give it a try. I had to buy new fitted boots and so on, but it is all worthwhile.

            We all have to surrender to old age, but not too soon.

            I spoke with Felix recently. He was born in Innsbruck and is the best ski tuner anywhere. He is going back to downhill skiing after an 8-year hiatus. He is 84. 🙂

  1. Progress of some quality, certainly. A monotonic perspective. The lack of correlation between hypothesis and observation is due to incomplete, and clearly insufficient, characterization, and an unwieldy experimental space. Extrapolating/inferring the effectiveness of a mechanism in a lab to the wild has limited utility.

  2. Climate change is predicated on the idea that sample noise is data and that trends in noise have meaning. Recent posts of sea level in Japan to 4 decimal places, when the accuracy is no better than a whole digit, show that this technology of using garbage to promote science conclusions is catching on all over.

  3. The logical approach would be to start dropping models that are the worst at matching observations altogether, not smoosh them up with everything else in some insane belief that averaging things known to be wrong somehow arrives at something more accurate.

    But then the dropped models would have a tougher time maintaining their funding. Can’t have that, so smoosh them in and apply some adjustments to compensate for them…

    I have to go puke now.

    • Of course if one networks enough crystal balls together all the uncertainty will disappear completely!

      Just as two ‘wrongs’ do not make a ‘right’, the sum of all fictions does not equal the truth.

    • Not that simple, david.

      For example, models that perform better on one set of metrics (say temperature) can perform worse on a different set, say rain

      Yes folks did think about your approach.

      years ago

      • So why does the one that performs better on temperature also perform worse on rain?
        It can’t be because it accurately reflects the physical processes as the physical processes are not independent of each other.
        So why pick a model that happens to follow history for the wrong reasons? You will end up with a Frankenstein’s Monster of many errors.

        Which explains why Climate Science has made no progress in thirty years.
        Remember the uncertainty in Climate Sensitivity has not reduced since AR1. Despite improved computers and observations.
        The approach is flawed. Go back to davidmhoffer’s.

        • “…So why pick a model that happens to follow history for the wrong reasons? You will end up with a Frankenstein’s Monster of many errors…”

          Therein lies the rub.

          Even models which appear reasonably accurate on a global scale when compared to observed data are utterly hogwash on continental and regional scales.

          They are a comedy of errors which sum up to something that looks reasonable, and they are somehow given credibility.

        • MC
          Yes, it is a cardinal ‘sin’ in science to be right for the wrong reason. It might well just be coincidence that a model is right on one parameter. In any event, a metric for evaluating models should be how well ALL the predictions perform, not just one. If interrelated physical processes correlate poorly, it is a strong suggestion that the tuned models are either missing something or doing things wrong.

      • “Yes folks did think about your approach.

        years ago”

        Great that they thought about it. What did they DO about it?

      • Mosh – as M Courtney already pointed out, if the model does a bad job of anything, it means it is getting the physics WRONG. That the sum of the errors produces a result that matches observations on some things means it has done so on physics that is WRONG, and only arrives at a better match due to other errors or adjustments. Since the underlying physics is WRONG, run it long enough and it will go wildly off kilter.

        • I do not think Mosher’s statement in his comment is wrong.

          Maybe Mosher has to emphasize the difference in meaning between “temperature” and “rain” when it comes to GCM projections, especially when for one of these the case has already been dialed down from “worst than ever” to “worse”, and in the prospect of a further reduction in climate sensitivity it will either be considered better or the AGW will completely lose GCM support from climate projections.

          While switching to rain or precipitation or winds can still keep the “worst” around for somewhat longer, with a chance to dial it up to “worse than ever”… within the man-made climate paradigm.

          Sorry if I am reading something other than you meant, Steve.


          • But Mosher’s statement supports the strategy on climate modelling that has failed for thirty years.
            A good definition of madness is to repeat the same thing over and over and expect different results.

            Seriously, you cannot study parameters in isolation if they interact. Not when there is no possibility of holding all the other parameters constant while you run the world.

            Mosher’s statement has been proven wrong by logic and by experience. The approach does not work. And it should not be expected to work.
            Why would anyone expect it to work?

          • “One promising approach involves using all the models together to find relationships between the climate variations being observed now and future climate change.”

            So meld the hopeless models together, we will get a better output. Whether it is “more true” remains moot.

          • There are no “… climate variations being observed now …” Other than minor post-LIA warming, with ups and downs, there has been no observed significant change to any climate metric in over 100 years. Get a grip, people; our climate is remarkably stable.

      • So just mush together the ones that do good on rain and exclude those that don’t from the rain projection.
        Then mush together the ones that do good on temperature and exclude those that don’t from the temperature projection.

        It’s not really that complicated, assuming an accurate projection is actually your goal.

        • No. They only do good on rain then.
          There’s no reason to expect them to do good on rain in other times if the reason they do good on rain is wrong.
          And as they do bad on things other than rain the reason they do good on any one thing is probably wrong.
          You may have an accurate projection of your expectations but it won’t be an accurate prediction of the real world.

          • MC, even if you had some modicum of success on temperature, but rain was wrong, and you added a coefficient to make it good on both, this is a model waiting to fail massively in the future. This is children doing brain surgery.

            Moreover, if there were ever to be a model that could at least get the sign correct from the signal in the data, the data itself has been, and remarkably continues to be, adjusted. Any chance of a real discovery being made in this science has been foreclosed on by agenda-driven data fiddling. Certainly this has to be a major factor in the worsening of model performance, with rigid catechistic adherence to a formula cast in stone 40 years ago.

          • I’m puzzled by this mashing together business. Surely the obvious thing is to work out why the models that are good at rain are good at rain and why models that are good at temperature are good at temperature and then produce a model good at both.

          • Ben Vorlich,
            One main thing you are missing there, Ben, is that rain, wind and precipitation are not global, while temperature could, in some aspects, be considered “global” as per the merit of GCM projections. Rain is not global, and neither is anything local or regional. Not in AGW, where the G clearly stands for global only, not regional or local!

            ECS can only be considered in the concept of the global, not the local or regional, like rain or wind or snow, or as per any other latter-day claim.

            ECS and TCS are only terminology, deployed in the service of a great intended fraud-deception in the line of learning and knowledge; there is not much of any value these days in either ECS or TCS.

            ECS is simply there to support the concept of CS when it comes to GCMs, and TCS is only there to support CS when it comes to real data, as CS in both cases, either GCMs or real data, cannot make it on its own unless this further intended deceiving approach is considered.

            Meaning that CS, ECS and TCS are simply huge bollocks: considered across the whole of the spectrum, no more than a very well thought-out and calculated scientific deception, in science and knowledge, in concept and principle; very well objectively intended, very well objectively thought out, and forcefully objectively applied, with not much regard.

            While, as per the old rules, this whole affair considered, it stands as the worst of all crimes, that of “sorcery”: very well and objectively thought out and very well and objectively intended, within the whole of its objective fakery and intended objective disruption.

            But hey, not much to worry about there, as no one really cares any more about the cardinal crime in the meaning of the old, either objectively or subjectively.

            When clearly losing CS, or its main supports, ECS and TCS, the only thing remaining for the sillies is to brush all of it under the carpet and force it criminally: their further AGW accommodation and acceptance, forced by the concept of local and regional weather projections, all of it derived and squeezed out of GCM climate projections into weather GCM projections supporting AGW, at any cost, under any circumstance, while the whole concept of the global, as per the merit of CS, is clearly lost.



      • “For example, models that perform better on one set of metrics ( say temperature) can perform
        worse on a different set, say rain”

        ….then they are total garbage

      • Models that differ in base global temperatures by 3+ degrees C are not modeling the same physics.

        The fact that modeled hindcasts diverge tremendously, modeled late-20th century converge and modeled future diverge tremendously show that the whole thing is an exercise in parameterization to mimic the late-20th Century. Each model has its own physics and cannot be compared to nor combined with any other model.

      • For example, models that perform better on one set of metrics ( say temperature) can perform worse on a different set, say rain

        Then what you have isn’t really a model of any sort. What you have is a lucky parameterization. A true model, one that is actually physics-based, would have to follow at least the sign of the variable. Do any of them do that at least, say, 85% of the time?
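The "sign of the variable" test suggested here is straightforward to compute. The series below are synthetic, and 0.5 is the no-skill baseline (a coin flip):

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(size=120)                            # observed anomalies
model = 0.6 * obs + rng.normal(scale=0.8, size=120)   # partially skillful model

# Fraction of time steps on which the model anomaly has the same
# sign as the observed anomaly.
sign_skill = np.mean(np.sign(model) == np.sign(obs))
print(round(float(sign_skill), 2))
```

A model with no relationship to the observations scores near 0.5 on this metric, so values well above that indicate at least directional skill.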

    • I don’t think any adjustments were made to the models running hot. They are the source of the 4.5 degrees per doubling of CO2, yes? Those hot-running models are left hot in order to make scary “projections”.


      • The UN IPCC AR5 had to arbitrarily reduce the near-term modeled “projections” because they were running way too “hot.” They did not reduce the out-years because that would hurt the CAGW meme.

        I wonder what they will do with the CMIP6 results in the UN IPCC AR6? From what little I understand, it will be more of the same modelturbation, with scary stories about a hothouse earth.

      • When you cool the past… get rid of the MWP… and then hindcast the models to that…
        what would you expect? Exactly what they do.

      • Steve, the synod of climate gurus are caught between a rock and a hard place. It was pointed out that the delta-T projections proved to be 300% too high. A 40-year deep cooling period after 1940, when CO2 emissions were accelerating, and an 18-year temperature Pause after we recovered from the deep cooling, bringing us back only to ~the 1940 high by 1998, meant that all the warming to date had actually occurred prior to 1940, when CO2 wasn’t a factor.

        They set to work feverishly, bending and twisting data to remove this falsifying evidence. They calculated what the facts meant for ECS and realized it was so small that there was nothing to be alarmed about. Then they fiddled the models, tuned them to the grotesquely altered data and doubled down on the alarm. Only now, instead of 3-5C warming from 1950 to 2100, it was only 0.7C more from 1950 (1.5C from 1850!) that was going to have serious impacts!! 2.0C (1.2C from 1950) was to be catastrophic! I have a wager that, all-in, we can’t even reach 1.5 if we strive to attain such an increase.

  4. Why is it now clear that we need to rapidly reduce carbon dioxide emissions to avoid “crashing” through the 1.5 – 2.0 deg C limits set out in the Paris Climate Agreement?

    • There has never been ANY science that indicates that going over 1.5C will be a problem, much less a disaster.

      Originally, the 1.5C was merely the point at which the world would get back to the temperature the world enjoyed during the MWP. The claim was that above that, we don’t know what would happen.
      Somehow the usual suspects have taken “we don’t know” to “here be dragons”.

      BTW, it’s been warmer than the MWP 3 times in the last 5000 years and the world thrived.
      It’s been warmer than the MWP for some 80 to 90% of the last 10K years and the world thrived.
      So the claim that disaster awaits if we go over 1.5C has already been disproven. By the real world.

      • The last interglacial was about 3C warmer than the current one and the planet and our ancestors survived just fine. Not only has there not been any science to indicate that going over 1.5C is a problem, there’s not even any validated science that can show how doubling CO2 would cause more than 1.1C and most likely far less. Do nothing and the effect of doubling (even tripling) CO2 is already less than 1.5C.

  5. Removing uncertainty from the presumed ECS will not do any good, as the existing uncertainty doesn’t even span the actual ECS, which is about 0.3C per W/m^2, less than the claimed lower limit of 0.4C per W/m^2.
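For readers more used to per-doubling units: taking the commonly quoted ~3.7 W/m² of radiative forcing per doubling of CO2 (a standard value, not taken from the comment), the commenter's figure converts as

```latex
\Delta T_{2\times} \approx 0.3\,\frac{^{\circ}\mathrm{C}}{\mathrm{W\,m^{-2}}} \times 3.7\ \mathrm{W\,m^{-2}} \approx 1.1\ ^{\circ}\mathrm{C},
```

i.e. roughly 1.1 °C per doubling, while the 0.4 °C per W/m² lower limit mentioned corresponds to about 1.5 °C per doubling.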

  6. “way to focus evaluation on the observations most relevant to climate projections, and advanced methods for model weighting.”

    Who will and how will they determine which observations are most relevant? Sounds like more gobbledygook to me and another way to rig the data.

  7. 29 authors and the masterly obfuscations and obliqueness of this article are sure signs of a CYA story.

    Bet it does not change the process for AR6.

    • There doesn’t seem to be a simple way to explain the figures without quoting the paper so extensively I would be concerned about copyright issues. The caption to Figure 3 says the two graphs are taken from two papers, both of which have some of the authors of the present paper included:
      53. Sherwood, S. C., Bony, S. & Dufresne, J. L. Spread in model climate sensitivity traced to atmospheric convective mixing. Nature 505, 37–42 (2014)
      61. Wenzel, S., Cox, P. M., Eyring, V. & Friedlingstein, P. Projected land photosynthesis constrained by changes in the seasonal cycle of atmospheric CO2. Nature 538, 499–501 (2016).

      The graph on the left of Fig. 3 is described as showing some correlation between ECS values generated by CMIP models and a lower-tropospheric mixing index from the same models, which it says is calculated as the sum of two indices: one for the small-scale component of mixing, proportional to the differences of temperature and relative humidity between 700 hPa and 850 hPa, and one for the large-scale lower-tropospheric mixing.

      The graph on the right appears to relate CO2 fixed photosynthetically to a measurement of annual increases of CO2 at Point Barrow Alaska.

      Fig 4 caption says it is taken from
      Sanderson, B. M., Wehner, M. & Knutti, R. Skill and independence weighting
      for multi-model assessments. Geosci. Model Dev. 10, 2379–2395 (2017).
      and just displays a particular “skill weighting” procedure.

      The paper looks like a straightforward recitation of the discussion that occurred at the workshop rather than an attempt to demonstrate anything new. “Emergent” appears to mean essentially things they didn’t think of before that might be a good idea.

      I have already spent too much time looking at this paper and need to get back to what I was doing now.

    • How do they determine model “skill”?
      By how well the models produce the numbers that the organizers are paying to see?

  8. “Produced by a team of 29 international authors, the study is published in Nature Climate Change.”
    Why does it take 20 or 30 “authors” to do ONE of these “climate studies”? IIRC, Einstein had one assistant to help him change the world….. hmmmmmmmmm……..

  9. Why is it now clear that we need to rapidly reduce carbon dioxide emissions to avoid “crashing” through the 1.5 – 2.0 deg C limits set out in the Paris Climate Agreement? I didn’t think it was clear at all. Talk about being in denial. The Paris Climate Agreement is dead. They pretend it is still alive.

  10. Amazingly, the only “model” that matches reality is the Russian model…Oh no, Now Mother Nature is “colluding” with Russians…! LOL

  11. Is this before or after we change all the historical data to be colder and all the current temperatures higher? Alternatively, I guess we could use raw data from rural stations that have been well sited and maintained. However, I think the best thing to do would be to not do the study at all and worry about real problems instead.

  12. 29 authors ??
    If she/he were correct, only one would be needed.

    I recall Willis has a rule about the number of authors.

  13. The whole thing smells of more GIGO climate porn. I have yet to see them conduct an engineering evaluation of the models, and the approach of “averaging them together” to get a “better answer” has always smacked of confirmation bias gone wild!

    • The models are adjusted until they give a sensitivity that the modelers think is “about right.” Each modeling group has its own idea of “about right.”

  14. No computer model can calculate the future because the future does not exist as a single point. It is only a probability with a range of possible futures regardless of any action we might take.

    We can influence the future, but even trying to calculate the effect is problematic. Cutting CO2 could even make things worse due to unintended consequences.

    Better to maximize economic benefit rather than try and change the weather. That way you have more resources to cope with whatever happens.

  15. This is OT but worthy of a mention:
    In the hotel ‘New Yorker’, New York, on today’s date, 7 January 1943, the great American/Serb scientist, inventor and engineer Nikola Tesla died, aged 87.
    As it happens, today is the Orthodox Christians’ (and Nikola Tesla’s family’s) Christmas Day, so anyone who might be celebrating today, have a happy Holy Day.

  16. They need a method to get rid of the Russian model. It is embarrassing them. Too close to reality for their own good.

  17. Although, as far as I know, Pat Frank was never able to get his paper published, his presentation in this video ( ) is clear and reasonable and his conclusions valid. From an uncertainty-propagation standpoint, models can tell us nothing about the future. Any result will be within the huge uncertainty window and cannot be counted as any more probable than any other.
    Additionally this paper erroneously assumes that human emissions are responsible for all the increase in atmospheric CO2 and thus all the warming from it.

    • He hasn’t been here in a while, but rgbduke had a number of comments making some excellent points on this issue here at WUWT. You can probably search and find them; they are well worth the effort.

  18. If the models are actually worth something then the error covariance between any randomly selected set of models should be random. But if there are enough models you will get two or more of them that trend together, just from chance. Okay, you guys can take it from here …
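The error-covariance check proposed here is easy to sketch: correlate the error time series (model minus observations) of pairs of models. The data below are synthetic, with "shared" standing in for a common structural error that two non-independent models inherit:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 240
obs = rng.normal(size=n)
shared = rng.normal(size=n)                 # common structural error

err_a = shared + 0.3 * rng.normal(size=n)   # models A and B share an error mode
err_b = shared + 0.3 * rng.normal(size=n)
err_c = rng.normal(size=n)                  # model C errs independently

models = {"A": obs + err_a, "B": obs + err_b, "C": obs + err_c}
errors = {name: sim - obs for name, sim in models.items()}

def err_corr(e1, e2):
    """Pearson correlation between two error series."""
    return np.corrcoef(e1, e2)[0, 1]

print(round(err_corr(errors["A"], errors["B"]), 2))  # high: not independent
print(round(err_corr(errors["A"], errors["C"]), 2))  # near zero: independent
```

High error correlation between a pair of models is exactly the kind of non-independence that the paper's weighting schemes are meant to discount.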

  19. “Although the human impact of recent climate change is now clear”

    It’s only “clear” if one assumes that all the warming of the last 150 years is due to CO2.

  20. More computing power to calculate epicycles does not help. You need a better understanding of weather phenomena. For that you need more, and more accurate, observations to constrain the models with all of the variables. Calculating surface temperatures is just a tiny part of the problem.

    Known phenomena are not enough in a complex adaptive system. Plate tectonics, volcanic activity, the sun, the solar system and beyond may still surprise. Long-term predictions will never be possible, but modelling might be useful, especially in understanding local weather.

  21. Instead of trying to improve the predictive ability of models by combining all the failed ones, doesn’t it make better sense to investigate what the Russian model does differently, producing predictions that are validated by the measurements?

  22. They will need many meetings and work groups and publications to effectively demote the satellite data and promote biased ground station data and other measurement flaws.

  23. I thought they had already reduced the uncertainty in their models by adjusting the observed and historical temperature data via a process they call “homogenization”. That way reality matches their simulations.


  24. Gotta love a graph that uses a skill measurement. I remember reading some review by MM years back that must have used the word skill or skillful several times and I just wanted to go take an antacid.

  25. Any paper which starts,

    “Although the human impact of recent climate change is now clear,”

    causes me to hit the delete key. It warns there is nothing but agenda-based crap to follow.

    Earth system models are complex beyond our understanding but we keep tweaking them randomly because that’s what we are paid to do.

  27. Regarding “Although the human impact of recent climate change is now clear”… when did this happen? Must have missed the research paper, and supporting confirmation studies.

  28. Cargo cultists are building newer and bigger runways and associated bamboo control towers, manned with natives jabbering into coconut shells.
    By all accounts, according to their new estimates, those darn cargo planes, filled with wondrous goods, will be landing any day now.

  29. “Although the human impact of recent climate change is now clear” — Exactly WHEN are we going to see some QUANTIFICATION of that Statement? Until that happens, all the rest is so much Propaganda and Cow Manure!

  30. “Produced by a team of 29 international authors”

    The quality of any academic paper is inversely proportional to the square of the number of authors. So with 29 authors, you know it is just a gigantic load of bovine excrement.

  31. A tacit admission that all previous models are junk.

    In four years, their new super ‘next level’ model will be revealed to be junk, with the introduction of their new, superduper Stage 3 Climate Model.

    Their lack of self awareness is amusing.

  32. It seems that these “researchers” think that if all the errors are averaged out they will get a more reliable forecast.
    This seems familiar, but from a different field.
    The 2008 Crash was the worst since the early 1930s.
    At its heart were contrived instruments called CDOs: Collateralized Debt Obligations.
    Those could only be created with a lot of computer power. Many such obligations were “bundled” into securities.
    In a boom, investors abandon caution and seek a higher return, taking on risk. To diminish concerns about that risk, the “bundles” of junk-rated stuff were declared to average out, altogether, as investment grade.
    Not so.
    The Crash was Mother Nature’s way of dealing with delusions in the financial world.
    Mother Nature and Mister Margin are right now messing up the theories behind economic intrusion.
    She is about to do it “big time” to the world of climate modelling.

  33. I liked the title of one of the references – “The Art and Science of Climate Model Tuning.”

    Straight out of the abstract: “…Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers…”

    Indeed it is “probably not advertised enough” lol.

    • Many thanks, that’s a very interesting and useful paper. Another quote:

      “Why such a lack of transparency? This may be because tuning is often seen as an unavoidable but dirty part of climate modeling, more engineering than science, an act of tinkering that does not merit recording in the scientific literature. There may also be some concern that explaining that models are tuned may strengthen the arguments of those claiming to question the validity of climate change projections. Tuning may be seen indeed as an unspeakable way to compensate for model errors.”

      This paper does confirm my worst suspicions about climate models.

      I have defined what I called a “pure” climate model. A pure climate model has these three elements:
      1. The initial conditions.
      2. The physical laws.
      3. Absolutely nothing else.

      As the paper makes clear, pure climate models are completely impossible. As even the IPCC has admitted, it is almost certainly impossible to make long-term climate forecasts due to its chaotic nature. But, putting that aside, a pure climate model would require computers perhaps trillions of times more powerful than anything we have today.

      It seems pretty clear: tuning is basically a sophisticated form of curve fitting. This makes them pretty good at forecasting the past. But it means they are useless at forecasting the future. There’s only one way to test a model’s forecast skills: run the model and then wait 30 years to see how good the forecast was. Of course, we’ve done that: we’ve had modern supercomputer forecasts since the 80’s. And they failed badly, forecasting 2 or 3 times more warming than actually happened. Quite possibly a model that forecast precisely zero warming would have been more accurate.

      The paper said: “Tuning may be seen indeed as an unspeakable way to compensate for model errors.”

      I see it differently. I see tuning as an unspeakable way to fool the world’s gullible politicians and cause them to squander trillions of dollars trying incompetently to solve a problem that almost certainly does not exist.
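The “curve fitting” point above can be shown with a toy numerical sketch (a hypothetical illustration, entirely invented for this thread, not anything from the paper): a heavily tuned high-degree polynomial reproduces a noisy historical record at least as well in-sample as a simple linear fit, which says nothing about its skill out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up 30-year "temperature record": a gentle linear trend plus noise.
years = np.arange(30.0)
record = 0.01 * years + rng.normal(0.0, 0.05, size=years.size)

# A simple fit versus a heavily "tuned" high-degree polynomial fit.
simple = np.polynomial.Polynomial.fit(years, record, deg=1)
tuned = np.polynomial.Polynomial.fit(years, record, deg=12)

# In-sample (hindcast) error: the tuned fit always looks at least as good,
# since a degree-12 least-squares fit can only reduce the residual further.
rmse_simple = np.sqrt(((simple(years) - record) ** 2).mean())
rmse_tuned = np.sqrt(((tuned(years) - record) ** 2).mean())

# Out of sample, the extra degrees of freedom buy nothing: extrapolating
# the high-degree fit a decade ahead typically drifts far from the trend.
future = np.arange(30.0, 40.0)
extrapolation = tuned(future)
```

A better hindcast score from the tuned fit is guaranteed by construction; it is the extrapolation that exposes the overfitting.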

    • MJ, I cannot speak to the Art side of the issue, but as for the Science and Engineering side, “tuning” has a long and distinguished career, being instead fondly referred to as introducing a “fudge factor.” According to Wikipedia: “A fudge factor is an ad hoc quantity or element introduced into a calculation, formula or model in order to make it fit observations or expectations. Examples include Einstein’s Cosmological Constant, dark energy, the initial proposals of dark matter and inflation.”

      Interestingly, the etymology dictionary has this to say about the origin and history of the word “fudge”: “verb, put together clumsily or dishonestly, 1610s, perhaps an alteration of fadge ‘make suit, fit’ (1570s), of unknown origin. As an interjection meaning ‘lies, nonsense’ from 1766; the noun meaning ‘nonsense’ is 1791. It could be a natural extension from the verb.”

      But “tuning” sounds so much more professional, doesn’t it? As in tuning a piano to obtain near-perfect pitch.

  34. If you don’t know how sensitive the Earth is to greenhouse gases emitted by humans, then the human impact is far from settled or clear. Another case of a logical fallacy from the science-is-settled crowd.

  35. I know perfectly well what is needed to squeeze the uncertainty out of models. I’ve worked a lot with computer modelling (admittedly of much simpler processes than climate), and no model with three or more parameterized variables is any good. The number of possible ways to get a particular result becomes too big to extract any sensible information from the model (well, not always: if you know the parameterized data is correct within very tight limits, and can afford lots and lots of computer time, you can have a few more parameterized variables).

    So, get rid of the parameterizations. Calculate everything from basic physics. In some cases (e.g. cloud microphysics) that will mean a lot of hard, difficult basic research before you can start modelling. In other cases (e.g. convection) the physics is well understood; it just needs a computer a few billion or a few trillion times faster to do the physics on a realistic space and time scale. Plus initialization data vastly more detailed than is available at present. Oh, and you might also have to solve a number of classic unsolved mathematical problems, like finding a general solution to the Navier-Stokes equations.

    So, go for it boys!
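The point in comment 35 about parameterized variables, that many different parameter settings can reproduce the same result equally well, can be demonstrated with a toy two-parameter “model” (entirely hypothetical; the parameter names, formula, and numbers are invented for illustration):

```python
import itertools

# Toy "model": output depends on two tunable parameters (both hypothetical).
def model(sensitivity, feedback):
    return sensitivity * (1.0 + feedback)

target = 3.0      # the single observation we "tune" against
tolerance = 0.05  # how close counts as a match

# Scan a coarse parameter grid (0.1 .. 5.0 in steps of 0.1) and collect
# every parameter combination that reproduces the target equally well.
grid = [round(0.1 * i, 1) for i in range(1, 51)]
matches = [
    (s, f)
    for s, f in itertools.product(grid, grid)
    if abs(model(s, f) - target) < tolerance
]
```

Even this trivially simple model admits many distinct parameter pairs (for example sensitivity 1.0 with feedback 2.0, or 1.5 with 1.0) that match the observation, so the fit alone cannot tell you which setting is physically right.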

  36. What does climate modelling rest upon? An assumption. What does the assumption rest upon? Another assumption. In fact, it is assumptions all the way down!

    The primary difference between this Aspen con/fab and nerdy college kids gathering to play D & D, is that the college kids know their game-world is not real.

  37. At least they seem to recognize that there has been something wrong with the plethora of models they have been using. The reality is that the climate change we have been experiencing is caused by the sun and the oceans, over which mankind has no control. Despite the hype, there is no real evidence that CO2 has any effect on climate, and there is plenty of scientific rationale to support the idea that the climate sensitivity of CO2 is zero. So they can improve their models by taking out all code that assumes that more CO2 causes warming and eliminating all the models that have come up with wrong results. Instead of a plethora of models they should have just a single model, with no fudge factors, that adequately predicted the end of the 20th-century warming cycle and the current global warming pause.

  38. “Fig. 1: Annual mean SST error from the CMIP5 multi-model ensemble.”

    “Fig. 4: Model skill and independence weights for CMIP5 models evaluated over the contiguous United States/Canada domain.”

    There is a warning one encounters in virtually every financial market, trading desk, etc.;
    “Warning! Past performance is not an indication of future performance!” “Traders and investors buy and sell at their own risk.”

    These fools are weakly admitting their models are trash, but claiming they can fix their models by attending an “intense workshop at the Aspen Global Change Institute in August 2017”.

    Utter silliness!

    “Fig. 3: Examples of newly developed physical and biogeochemical emergent constraints since the AR5.”

    Lovely, they want to add complexity.
    A bandaid and lipstick for broken climate models, and excuses for all the alarmist faithful.

    “Reducing uncertainty in the sensitivity of the climate to carbon dioxide emissions is necessary”

    Billions and likely trillions of dollars over thirty years and these fools have still not worked out a way or method to test CO₂’s actual influence on atmospheric temperatures. Let alone determine that influence at all locations, altitudes, land or ocean throughout the world.

    ““When considered together, the latest models and observations can significantly reduce uncertainties in key aspects of future climate change”, said workshop co-organiser Professor Peter Cox of the University of Exeter in the UK.”

    There’s the lipstick!

    Defund them!
    Demote any USA researcher who attended via government or taxpayer funds!

    • ATheoK, I admire the chutzpah of Professor Cox for use of the ambiguous word “can” instead of the definitive word “will.”

      Then again, many professors living largely in a university environment get burned when trying to predict the future of the real world.

      BTW, the sentence attributed to Prof. Cox is actually nonsensical as written. The ending portion should have been stated as “. . . uncertainties in key aspects of PREDICTIONS OF future climate change”. That is, models and observations cannot in any way affect key aspects of future climate.

      • Almost agree.

        Back in prehistoric HS, English teachers spent interminable amounts of time drumming into our heads definitive word meanings and when a given word’s use was apropos.

        One of these word-pair conundrums was “will/shall”; including repeated references to General Douglas “I shall return” MacArthur.
        The upshot of that lesson from hades is that “will” is nebulous and “shall” is definitive.
        Yes, “will” is a much, much firmer word than “can”, but not definitive.

  39. Anybody who thinks that Warmist Alarmism has ever used high power computers in an efficient manner to do ANYTHING needs to be sterilized and denied children.

    100% of the crap they keep showing us can be done on Lotus 1-2-3 with a 486 SX.

    Hansen did it better in the 1980s with pencil and paper.

  40. For 40 years now, at least two generations, they’re “taking climate model evaluation to the next level”.

    Wasted lives.

  41. For 40 years, at least two generations, grabbing breath:

    “It is believed that this is the case, and that it is a process that is becoming increasingly relevant to model projections. These approaches are needed to distil the most credible information on regional climate changes, impacts, and risks for stakeholders and policy-makers.”

    Wasted lives.

  42. If I understand the procedure they are proposing, what they are going to do is take a bunch of disparate models with different assumptions, collectively having a very wide spread in projected temperatures, precipitation, etc.; then look at the actual temperatures that have been measured and put selective weights on the output of each of the models, the weights chosen so that the weighted sum of the models better represents the actual temperatures, precipitation, etc.; and then assume that the weighted group of models better forecasts the future.

    Assuming that this is a correct restatement, I would simply propose that we call this novel procedure the “Texas Crapshooter Method.”
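Restated in code, the weighting procedure the commenter describes might look like this minimal sketch (purely illustrative: the model rows, all of the numbers, and the inverse-RMSE weighting rule are assumptions, not the paper’s actual method):

```python
import numpy as np

# Hypothetical ensemble of model hindcasts, weighted by how well each
# matches observations, then combined into a single weighted projection.
# Rows = models, columns = years of hindcast anomalies (made-up numbers).
hindcasts = np.array([
    [0.10, 0.25, 0.42, 0.55],   # model A
    [0.05, 0.30, 0.60, 0.90],   # model B (runs hot)
    [0.12, 0.20, 0.38, 0.50],   # model C
])
observations = np.array([0.11, 0.22, 0.40, 0.52])

# Skill: root-mean-square error of each model against the observations.
rmse = np.sqrt(((hindcasts - observations) ** 2).mean(axis=1))

# Convert skill to weights: lower error -> higher weight, normalized to 1.
weights = (1.0 / rmse) / (1.0 / rmse).sum()

# Future projections from the same three models (also made-up numbers)...
projections = np.array([1.8, 3.0, 1.6])
# ...combined using the skill weights instead of a plain average.
weighted_projection = weights @ projections
```

In this toy case the hot-running model B gets a small weight, so the weighted projection lands below the plain ensemble mean; whether such weights mean anything for the future is exactly the question the comment raises.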

  43. “When considered together, the latest models and observations can significantly reduce uncertainties in key aspects of future climate change”

    All right then, just back off the CO2 thermostat to match observations.

    What? There is no CO2 thermostat in the models? How can it be the control knob in the real world but not in the models?

  44. To err is human; to really foul things up requires a computer.
    – William Edward ‘Bill’ Vaughan, April 1969.

    … written just before the population bomb / global cooling / global warming / climate change / whatever-it’s-called-now ideology took off by scaring people into thinking that the world was coming to an end.

    It’s also good to have a huge dollop of charisma in getting that “world is coming to an end” message across.

    Honestly, the Climate Change academic crowd make Charismatic-Pentecostal, hell-fire-and-damnation preachers look like wall flowers at a high school dance.

  45. That Figure 1 is actually rather interesting; it shows, inter alia:

    Climate models can’t handle upwelling

    Climate models can’t handle water/sea ice interaction

    Climate models can’t handle the Antarctic convergence

  46. “Although the human impact of recent climate change is now clear.” So how is the impact from “recent” climate change clear? The only clarity I see is no impact: normal ups and downs of temperatures, storms, etc. These people make this claim without a bit of evidence. They assume we are all aware of numerous articles or reports that offer vague, uncorroborated evidence of fossil-fuel-induced climate “extremes”, and act as if those documents demonstrate it “is now clear”.
