A Sea-Surface Temperature Picture Worth a Few Hundred Words!

We covered this paper when it was first released, here is some commentary on it – Anthony


Guest essay by PATRICK J. MICHAELS

On January 7 a paper by Veronika Eyring and 28 coauthors, titled “Taking Climate Model Evaluation to the Next Level,” appeared in Nature Climate Change, Nature’s journal devoted exclusively to this one obviously under-researched subject.

For years, you dear readers have been subjected to our railing about the unscientific way in which we forecast this century’s climate: we take 29 groups of models and average them. Anyone, we repeatedly point out, who knows weather forecasting realizes that such an activity is foolhardy. Some models are better than others in certain situations, and others may perform better under different conditions. Consequently, the daily forecast is usually a blend of a subset of available models, or, perhaps (as can be the case for winter storms) only one might be relied upon.
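To make the weather-forecaster’s point concrete: blending by skill and blind averaging can give quite different answers. A toy sketch (the forecasts and skill scores below are invented for illustration, not taken from any actual model ensemble):

```python
import numpy as np

# Hypothetical end-of-century warming forecasts (degC) from five models, plus
# invented skill scores (e.g. inverse historical error) -- all illustrative.
forecasts = np.array([3.2, 2.8, 4.1, 2.1, 3.6])
skill = np.array([0.5, 0.7, 0.2, 0.9, 0.4])  # higher = better hindcast

equal_weight = forecasts.mean()            # the "democracy of models"
weights = skill / skill.sum()              # normalize skill into weights
skill_weight = np.dot(weights, forecasts)  # blend favoring better models

print(f"equal-weight mean:   {equal_weight:.2f} degC")
print(f"skill-weighted mean: {skill_weight:.2f} degC")
```

With these invented numbers the skill-weighted blend comes in noticeably lower than the straight average, because the better-scoring models happen to be the cooler ones; the point is only that the two procedures are not the same operation.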

Finally the modelling community (as represented by the football team of authors) gets it. The second sentence of the paper’s abstract says “there is now evidence that giving equal weight to each available model projection is suboptimal.”

A map of sea-surface temperature errors calculated when all the models are averaged up shows the problem writ large:

Annual sea-surface temperature error (modelled minus observed) averaged over the current family of climate models. From Eyring et al.

First, the integrated “redness” of the map appears to be a bit larger than the integrated “blueness,” which would be consistent with the oft-repeated (here) observation that the models are predicting more warming than is being observed. But, more important, the biggest errors are over some of the most climatically critical places on earth.

Start with the Southern Ocean. The models have almost the entire circumpolar sea too warm, much of it off by more than 1.5°C. Down around 60°S (the bottom of the map) water temperatures get down to near 0°C (because of its salinity, sea water freezes at around -2.0°C). Making errors in this range means making errors in ice formation. Further, all the moisture that falls upon Antarctica originates in this ocean, and simulating an ocean 1.5° too warm is going to inject an enormous amount of nonexistent moisture into the atmosphere, which will be precipitated over the continent in nonexistent snow.

The problem is that, down there, the models are making errors over massive zones of whiteness, which by their nature absorb very little solar radiation. Where it’s not white, the surface warms up more quickly.

(To appreciate that, sit outside on a sunny but calm winter’s day and change your khakis from light to dark; the latter is much warmer.)

There are two other error fields that merit special attention: the hot blobs off the coasts of western South America and Africa. These are regions where relatively cool water upwells to the surface, driven in large part by the trade winds that blow toward the earth’s thermal equator. For not-completely-understood reasons, these winds sometimes slow or even reverse, upwelling is suppressed, and the warm anomaly known as El Niño emerges (a similar but much more muted version sometimes appears off Africa).

There’s a current theory that El Niños are one mechanism that contributes to atmospheric warming, which holds that the temperature tends to jump in steps that occur after each big one. It’s not hard to see that systematically creating these conditions more persistently than they occur could put more nonexistent warming into the forecast.

Finally, to beat ever more manfully on the dead horse—averaging up all the models and making a forecast—we again note that of all the models, one, the Russian INM-CM4, has actually tracked the observed climate quite well. It is by far the best of the lot. Eyring et al. also examined the models’ independence from each other—a measure of which models are, and which are not, making the same systematic errors. And among the most independent, not surprisingly, is INM-CM4.

(Its update, INM-CM5, is slowly being leaked into the literature, but we don’t have the all-important climate sensitivity figures in print yet.)

The Eyring et al. study is a step forward. It brings climate model application into the 20th century.

117 thoughts on “A Sea-Surface Temperature Picture Worth a Few Hundred Words!”

  1. It appears to me that models running high, and climate sensitivity due to doubling CO2 ranging up to 4.5 degrees are needed by those wanting to issue scary projections.

    SR

  2. I’d be interested to see whether any models are based on an ECS of 0.5, and how much closer to observation they become. If so, then maybe they could stop adjusting the temperature data.

    Not holding breath. We all know they need wiggle room to allow for sophistry when the predictions constantly fail.

      • Of course they can, and do! They simply go back and change the input data (and forget where they put it) then see where the model comes out. If they don’t think it ends up as what they want – or the sponsors desire – they wash, rinse, and repeat.

      • Latitude.

        It does not really matter; no one can change the reality.
        The most that will ever be achieved will be to produce a lag in the perception of it all, no more, no less.

        cheers

    • Or even if they were using the Lewis and Curry estimate for ECS, which IIRC is about 1.5 degrees. Which ECS did the Russian model use?

      • 1.35 is Lewis, I think. Highest of the ECS estimates based upon observations.
        Lindzen had what, 0.5?

        And Lewis and Curry likely used mostly the pre-adjusted data, apart from clouds etc., so that already has warming baked in.

      • The russian model has an ECS of 2.1. Its does USE an ECS of 2.1

        ECS is a EMERGENT property of a model

      • Shit

        The Russian model has an ECS of 2.1. It DOES NOT use an ECS of this value; it HAS an ECS of this value. ECS is an emergent property of models.

          • Indeed, but Mosher is right. The models are not supposed to contain a hidden built-in ECS; they are supposed to be written to simulate the atmosphere.

            In reality, it is difficult to know if the model was developed using some ECS in mind.

          • The different models are constructed and tuned to give an ECS that the different modeling groups think is “about right.” They all converge in the late 20th Century period, proving the power of aerosol assumptions.

            Mash them all together and you get an ECS of over 3. Real atmospheric data shows that to be greatly exaggerated.

        • Hi Everyone

          Mosher is right on this point. If you are a newbie to the topic, research what he means. The ECS is the result of running the model based on a number of physical principles and constants as inputs.

          The failure of all but one model to do a reasonable job would normally mean a lot of firings and project closures so evidently normal rules don’t apply to climate modellers.

          It is quite possible to have a “high” setting for particle cooling and GHG warming and “match reality well” for a time so don’t place bets on any of them.

          But the ECS is not a model input.

          • But tweaking a model to get a particular ECS is an input, Crispin. Even the modelers admit that they tweak their models and parameterize up the wazoo to get an ECS that “seems about right” to them. They use greater assumed aerosols to balance out the hot ones.

            Proof that it is all modelturbation? They all converge on the “tuning period” of the late 20th Century, but wildly diverge in hindcasts and forecasts.

            Until they all clean up their acts and fully document their programming, processes, parameterization and assumptions, I don’t believe they are sufficient to change our society, economy and energy systems. Chew on that, true believers.

    • As I understand it, ECS is not an input in the climate models (if it was, you could do the whole thing in 10 minutes on a pocket calculator, supercomputers not needed!). Rather, the ECS is derived from the results of modelling.
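This point about ECS being emergent can be made concrete with a toy zero-dimensional energy-balance model. Nothing below is a real GCM; the forcing and feedback numbers are illustrative assumptions. The point carries anyway: sensitivity is never typed in as an input, it falls out of the assumed physics.

```python
# Toy zero-dimensional energy-balance model (an illustrative sketch, not a GCM).
# Note: ECS is nowhere specified as an input; it emerges from the physics below.
F_2xCO2 = 3.7   # W/m^2, forcing from doubled CO2 (standard textbook value)
lam = 1.8       # W/m^2 per K, net feedback parameter (an assumed value)

dT = 0.0        # global-mean temperature anomaly, K
for _ in range(500):                  # relax toward equilibrium
    imbalance = F_2xCO2 - lam * dT    # top-of-atmosphere energy imbalance
    dT += 0.1 * imbalance             # crude relaxation step

ecs = dT        # emergent equilibrium warming per CO2 doubling
print(f"emergent ECS ~ {ecs:.2f} K")  # analytically F_2xCO2 / lam ~ 2.06 K
```

Change `lam` (say, through a cloud parameterization) and the emergent ECS changes with it, which is the sense in which modelers can steer sensitivity without ever entering it directly.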

        • Which means that it does not even exist… as far as the experiment is concerned.

          Only as a made-up concept to deceive, objectively speaking.

          cheers

        • “One can imagine changing a parameter which is known to affect the sensitivity, keeping this parameter and the ECS both within the anticipated acceptable range, and retuning the model otherwise with the same strategy toward the same targets”.

          –Frederic Hourdin, “The Art and Science of Climate Model Tuning”, BAMS, 2017.

          “Its climate sensitivity…had shot up from 3.5°C in the old version to 7°C, an implausibly high jump.
          MPIM hadn’t tuned for sensitivity before–it was a point of pride–but they had to get that number down.”

          Paul Voosen, “Scientists Open Up Their Black Boxes to Scrutiny,” Science, 28 October 2016.

          So yes it is an emergent property, but it is the scientist, not the science, that determines what the “anticipated acceptable range” is.

          • Patrick.

            If I am not wrong, which in this case I may very well be,
            the case you mention involved a change in the core parametrisation
            of the model.
            Even then, the 7°C versus 3.5°C was simply an assumed conclusion, because the 3.5°C was reached
            2x faster.
            If I am not too wrong about this, there was no actual 7°C result from the GCM simulation.
            A way of showing how wrongly CS is considered in the case of GCMs.

            The GCM simulation in this case yields about the same ECS as most others do, but because it reaches what these guys call climate equilibrium much faster, and probably at a lower CO2 concentration, the CS value in this case is considered higher by a factor of 2…
            kind of a messy scientific bollocks, if you get my point.

            cheers

    • problem with .5 is that you would fail all paelo sims.
      cant get out of a iceball earth.

      next, is its not clear what you would change in physics to get .5

        • It could be UNICORNS, yes.

          Same thing with evolution. You look at the facts and yes, it looks like evolution is a good explanation. But you CAN’T RULE OUT ALIENS!! directing evolution… or even god directing it.

          Yup, the nature of science is that all theory is underdetermined by the evidence.

          • I doubt all doors have been closed. It is very difficult to rule out things, especially unknown unknowns, even if we happen to suffer from the hubris of ‘modern science knows everything about paleo climates’.

            The evolution you put in here is a cheap shot. Stop that.

          • I love the way Mosher goes to extremes. If it’s not CO2, then it’s unicorns.
            Why not deal with actual arguments, rather than thrashing strawmen?

        • Something like: once most of the water is covered by ice, snowfall stops, so the ash being generated by volcanoes builds up on the ice, melting it.

      • That does not appear to make much sense, given that we enter ice ages when CO2 is at its highest (presumably accompanied by high water vapour, if feedbacks are to be believed), and exit ice ages when CO2 is at its lowest (accompanied by low levels of water vapour).

        • Refreshing, Richard Verney. When I read quotes from smart people (and try not to judge the source), I sometimes get lost, just accepting their statement as reasonable.
          What Stephen Mosher said sounded reasonable. But his statement presumes that CO2 MUST be correlated and measurable to some significant extent… Unfortunately, as you so quickly show, the answer is no to that premise.
          It might be correlated and measurable, or it might not be.

      • The problem with the paleo-sims versus the paleo-recons is that CO2 always lags temperature in the recons. In the sims, it is causal. It is a fundamental model error that no level of parameter or aerosol tweaking can fix.

        • In the Vostok cores, peak CO2 was never able to maintain peak temperature; in fact, peak CO2 always led to a decline in temperature.

          • In the Vostok cores, peak CO2 was never able to maintain peak temperature; in fact, peak CO2 WAS CAUSED BY temperature BECAUSE CO2 always LAGGED TEMPERATURE IN TIME.

            CO2 TRENDS LAG TEMPERATURE TRENDS AT ALL MEASURED TIME SCALES.
            – by hundreds of years in the ice core record;
            – by ~9 months in the modern data record.

            REFERENCES:

            CARBON DIOXIDE IN NOT THE PRIMARY CAUSE OF GLOBAL WARMING: THE FUTURE CAN NOT CAUSE THE PAST
            by Allan MacRae
            http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/

            http://www.woodfortrees.org/plot/esrl-co2/from:1979/mean:12/derivative/plot/uah5/from:1979/scale:0.22/offset:0.14

            THE PHASE RELATION BETWEEN ATMOSPHERIC CARBON DIOXIDE AND GLOBAL TEMPERATURE
            by Ole Humlum, Kjell Stordahl, Jan-Erik Solheim
            Global and Planetary Change, Volume 100, January 2013, Pages 51-69
            https://www.sciencedirect.com/science/article/pii/S0921818112001658

          • Your graphic from woodfortrees does not show which time series variable is dependent, and which is independent.

            Furthermore, taking the derivative of a time series removes the trend. Therefore you are comparing apples to oranges in your woodfortrees graphic.
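The derivative objection is easy to check numerically: differencing collapses a linear trend to a constant, so two series with very different trends can still correlate strongly in their derivatives. A toy sketch with synthetic data (nothing below comes from the actual CO2 or temperature records):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(480)                       # 40 years of monthly steps
wiggle = np.sin(2 * np.pi * t / 60)      # shared 5-year cycle

a = 0.01 * t + wiggle + 0.02 * rng.standard_normal(480)  # strong upward trend
b = wiggle + 0.02 * rng.standard_normal(480)             # no trend at all

# In levels the trend dominates series a, so the two correlate weakly; in
# first differences the linear trend collapses to a constant and the shared
# cycle dominates, so the derivatives correlate strongly.
r_levels = np.corrcoef(a, b)[0, 1]
r_diffs = np.corrcoef(np.diff(a), np.diff(b))[0, 1]
print(f"levels r = {r_levels:.2f}, derivative r = {r_diffs:.2f}")
```

So a tight derivative-vs-derivative match says the short-term wiggles move together; it says nothing, by construction, about whose trend is driving whose.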

          • JPP,
            there really is no dispute that CO2 lags temperature in the paleo reconstructions. Even in the last 150 years, since the end of the LIA in 1850–1870, CO2 didn’t start moving up significantly until about 1950.

            The climateers then argue that CO2 is a positive feedback that drives temps higher. They model that nonsense… ad nauseam, in supercomputer GCM after supercomputer GCM, all tweaked to provide results they “expect” and verified against nothing but other models.

            But then of course that defies logic, since we know climate is stable within the bounds of paleo-reconstructed temperatures, and thus negative feedback must be stronger than any positive feedbacks.

            The clear logical conclusion is that whatever feedbacks occur from CO2, water vapor feedbacks (both some + and some more -) ultimately converge to a strong negative feedback to counteract any CO2 GH effect.

            Mainstream climate science has taken a dead-end path to Climate Change alarmism due to a political bias (and rent seeking) wanting something that isn’t true in nature.
            The Left needs to accept the failure of its failed clisci and move on to its next scare tactic.

          • Further comments on MacRae 2008 and Humlum et al 2013, referenced above.

            I generally agree with the first three conclusions from Humlum 2013, as follows:
            1– Changes in global atmospheric CO2 are lagging 11–12 months behind changes in global sea surface temperature.
            2– Changes in global atmospheric CO2 are lagging 9.5–10 months behind changes in global air surface temperature.
            3– Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature.

            Points 2 and 3 are similar to my 2008 conclusions.

            Critiques of Humlum failed to refute the three conclusions above. In general, I regard all the critiques of these three conclusions as specious nonsense, which tend to obfuscate the clear observations in these papers.

            One hint: It is not necessary that ALL the increase in atmospheric CO2 is due to temperature – part of the CO2 increase can be due to other causes such as fossil fuel combustion, deforestation, etc., but part of it is clearly due to temperature – and that part demonstrates that CO2 trends lag, and do not lead temperature trends in the modern data record, and that observation DISPROVES the CAGW hypothesis.

            Another highly credible disproof of the CAGW meme is that fossil fuel consumption accelerated strongly after 1940 as did atmospheric CO2 concentrations, but global temperatures COOLED from ~1945 to 1977, warmed for over a decade, and then were relatively constant since – so the correlation with increasing atmospheric CO2 was NEGATIVE, POSITIVE AND NEAR-ZERO. To claim that atmospheric CO2 is the “control knob” for global temperature is a bold falsehood, that is refuted by observations at all measured time scales.

            Regards, Allan
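Allan’s sign-of-correlation point can be sketched numerically. The series below are synthetic, built to mimic the three regimes he describes (cooling to about 1977, warming for two decades, then roughly flat) against a steadily rising CO2 proxy; none of the numbers come from the real records:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1945, 2010)
co2 = 310.0 + 1.2 * (years - 1945)   # steadily rising proxy (invented numbers)

# Piecewise temperature anomaly: cooling 1945-77, warming 1977-97, roughly
# flat after, with a little noise so no segment is perfectly constant.
temp = np.where(years < 1977, -0.005 * (years - 1945),
       np.where(years < 1997, -0.16 + 0.02 * (years - 1977), 0.24))
temp = temp + 0.01 * rng.standard_normal(years.size)

rs = []
for lo, hi in [(1945, 1977), (1977, 1997), (1997, 2010)]:
    m = (years >= lo) & (years < hi)
    rs.append(np.corrcoef(co2[m], temp[m])[0, 1])
    print(lo, hi, round(rs[-1], 2))
```

Against one monotonically rising series, the sub-period correlations come out strongly negative, strongly positive, and near zero in turn, which is the arithmetic behind the claim that a single rising forcing cannot, by itself, explain all three regimes.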

      • “next, is its not clear what you would change in physics to get .5”

        The climate boffins have already changed the laws of physics to get 1.5-4.5.

        Change them back using actual real physics and you would get close to zero.

      • Steven is assuming sensitivity is a fixed value. I suspect it is variable and temperature dependent among other things. When you realize this is likely then you can have one sensitivity during glacial periods and a completely different one during interglacials.

        In addition, sensitivity can be different even within those periods and probably varies during the day.

        • Not really. Not assuming that at all; in fact, it probably is not.
          But for practical purposes I stick with my bet.

      • Mosher, you are assuming that the convection (Navier-Stokes equations) and the radiative transfer equations of the models are coupled together. Since the actual physics of convection and condensation has never been nailed down (e.g., how does the latent heat released by condensation actually get out to space?), there is no actual correct physics that the models can use. So they have to wing it by parameterizing simpler processes. The other thing you are assuming is that CO2 drives temperature. That assumption fails against the paleo records of ice ages. Global warming is a farce and you know it. I can’t understand why someone of your intelligence hangs on to this disgraceful climate-science meme.

        • Steve M disgracefully hangs on because it pays the bills!! Just like so many of the others they are following the money.

      • “cant get out of a iceball earth”

        1. No proof there ever was an “iceball earth”.

        2. And if there was, there are other mechanisms than CO2 for getting out of it (e.g. a LIP eruption or a major bolide strike to lower albedo).

      • Or, to put it another way: you would be failed by all proper GCM sims in regard to the concept of
        ECS,
        since the ECS is a generic concept derived from GCM simulation outputs,
        which according to my understanding stand in the range of 2.5 to 3.1°C… as per the sims’ range.

        The sims do not support any ECS value below or above that range, as far as I can tell…
        by the very sims that dictate the path to considering CS in its extrapolated
        new meaning as an ECS.

        So, as far as I can tell, a 0.5 is simply a mathematically extrapolated, meaningless value, not supported by the sims,
        when considered as an ECS value in the context of a supposed new climate EQUILIBRIUM.

        🙂

        cheers

  3. “There’s a current theory that El Niños are one mechanism that contributes to atmospheric warming, which holds that the temperature tends to jump in steps that occur after each big one. It’s not hard to see that systematically creating these conditions more persistently than they occur could put more nonexistent warming into the forecast.”

    Yet the central El Niño region and the North Atlantic are forecast too cool, so maybe they were expecting more positive Arctic Oscillation and North Atlantic Oscillation conditions than have occurred.

  4. A new study will show that the model averages were right all along… it was the data that needed fixing.

      • Skeptics should stick with the data. CO2 rise follows temperature rise after a lag of several hundred years.

        Most of the CO2 that we see is due to the temperature recovery from the Little Ice Age  (LIA) with a lag of 300 years . Coincidentally,  a much smaller amount is being added by humans.  It’s an accident that warmists, the IPCC and their much-amplified propaganda machine have taken advantage of.

  5. The two warm water blobs at the west side of South America and Africa are
    the warm waters, which emerge out of the ocean depth. And there, in the
    depth, the warm waters, as you all remember, was the place with the
    “missing heat hiding”.
    Therefore, the two blobs demonstrate how, by circulation, the missing
    heat nowadays comes out of its hiding place to the surface. This is what all the
    models account for. Obviously, the Russian model forgot the missing heat and
    therefore produced lower values than all the other models.

    • J.Seifert writes: “The two warm water blobs at the west side of South America and Africa are
      the warm waters, which emerge out of the ocean depth. And there, in the
      depth, the warm waters, as you all remember, was the place with the
      “missing heat hiding”.”

      I do not think that’s correct, at least with regard to the South American equatorial regions. The upwelling is always cooler (when it’s upwelling; ENSO-neutral to La Niña). The chart shows RED because the models showed the upwelling was less cool than measured… hence too warm.

        • I’ve been wondering about household toilet tanks. I think that’s where the missing heat might be hiding. Cue the music from “jaws” as we move in on an image of someone’s hand about to flush.
          We need urgent legislation to stop people from flushing their toilets.
          Does anybody have Alexandra Ocasio-Cortez’s contact info?

          The planet is cooling. How much longer will we have to listen to this AGW garbage before we can get to the public stoning for Mann, Hansen and the rest?

          • john

            We need urgent legislation to stop people from flushing their toilets.

            California (in its panic, with excess population from millions of illegal aliens, resident taxpayers and welfare receivers) already had many local and state regulations, and ran near-continuous advertisement campaigns to “Not flush” (“If it’s brown, flush it down.” was one I recall. Though which Brown they wanted flushed was, admittedly, as clear as the waste in the toilet.)

  6. As I and many others have pointed out, averaging a bunch of anything does not necessarily increase accuracy and precision. People get used to doing that in their undergraduate courses and it works under certain circumstances but it relies on criteria that most professors don’t explicitly state. I would go so far as to say that, for physical data, the technique almost always doesn’t work in the expected manner.
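The criterion the professors leave unstated is that averaging only cancels errors that are independent and unbiased; a bias shared by every ensemble member survives any amount of averaging. A minimal illustration with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
truth = 15.0   # the quantity being estimated (illustrative)

# Case 1: 29 estimates with independent, unbiased errors -- their
# average converges on the truth as the ensemble grows.
unbiased = truth + rng.normal(0.0, 1.0, size=29)

# Case 2: the same scatter plus a shared systematic bias of +1.5 --
# the scatter averages away, but the bias does not.
biased = truth + 1.5 + rng.normal(0.0, 1.0, size=29)

print(f"unbiased ensemble mean: {unbiased.mean():.2f}")  # near 15.0
print(f"biased ensemble mean:   {biased.mean():.2f}")    # near 16.5
```

If the 29 members share a systematic error (a common parameterization, a common ancestor code base), the ensemble mean inherits that error at full strength, which is exactly the model-independence question Eyring et al. raise.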

    • So true. A normally distributed random variable is assumed for most undergraduate work. Scary to think that these guys don’t understand that.

      • funny.
        yes

        Models that are different by 3+ degrees C in their base global temperatures are not modeling the same physics.

        • Maybe they’re subtracting the temperature at the Earth’s apogee from the temperature at the Earth’s perigee.

        • HUH?

          The nominal temperature of the earth is 288 K
          Some simulate it and come up with 289.5K
          Some get around 286.5 K

          That is scary good

    • 20th century instead of 21st…sly!

      As far as INM-CM5 goes…https://www.earth-syst-dynam.net/9/1235/2018/ , the paragraph that begins just below figure 6:

      “…One of the most intriguing observed features of ongoing climate changes is the fast summer Arctic sea ice extent decrease in the beginning of the 21st century. The ensemble of CMIP5 models underestimates the rate of decrease in Arctic summer ice area by a factor of 2. INMCM4 participated in CMIP5 and also significantly underestimates the extent of Arctic sea ice decrease (Volodin el al., 2013). In newly obtained INM-CM5 data (Fig. 7) we qualitatively see the same behavior of the Arctic sea ice as the average rate of sea ice loss is underestimated by a factor of 2 to 3. However, in one model run (purple) the magnitude of decrease is similar to the one in the observations (reduction from 7–7.5 million km2 in the 1980s to 4–5.5 million km2 in the 2000s). In other runs Arctic sea ice loss is underestimated by a factor of 1.5–3, and in one run (green) one can even see some increase in Arctic sea ice area during the last decades. Our results suggest that the rapid decrease in Arctic sea ice extent near year 2000 was partially induced by external forcing; however, the role of internal variability can be very important (the range of the sea ice extent year-to-year variability could be estimated as 3.0 million km2)…”

      If the CMIP5 ensemble members weren’t underestimating Arctic ice loss by a factor of 2, they would be running even hotter.

      Also of note is that only one of the seven INM-CM5 runs did an adequate job modeling Arctic ice loss.

  7. Nice work Patrick!

    ‘Finally the modelling community (as represented by the football team of authors) gets it. The second sentence of the paper’s abstract says “there is now evidence that giving equal weight to each available model projection is suboptimal.”’

    Gosh, it’s been like 10 years fighting against the democracy of models (as Gavin calls it).

    It’s long been thought that equal weighting was suboptimal; however, the question has always been
    how do you improve and justify the weighting.

    Best model

    http://berkeleyearth.org/wp-content/uploads/2015/02/figure38-inmcm4-vs-berkeley-earth.png

    • It really took 10 yrs to answer the question, “how do you improve and justify the weighting?”

      What a bunch of nincompoops.

    • “Democracy is a pathetic belief in the collective wisdom of individual ignorance.”- H. L. Mencken

  8. Worth looking at the NOAA sea surface anomaly as it is now. The Southern Ocean south of Western Australia has been anomalously cool for three years now, nothing like that picture above.
    https://www.ospo.noaa.gov/Products/ocean/sst/anomaly/
    As for the amazingly stupid idea that averaging erroneous models gives you a ‘true’ model, I am dumbfounded that any sane person with a sound grasp of simulations and associated errors could believe it.

  9. In a previous life 40 years ago when I was working on a team analyzing Chinese government reported economic data we soon came to realize that we were attempting double-precision arithmetic operations against wildly estimated data. All of our laboriously calculated trends and future projections were effectively totally worthless for any kind of rational policy recommendations to our customers.

    Seems sort of like what we are doing today in trying to analyze climatic data. It doesn’t matter how carefully you massage it; the base data is crap, so any results are crap. Oops.

    Oh, and my friends still looking at Chinese economic data assure me it is still pretty much worthless because nobody in the government wants to report “real” economic data and risk exposing the truly shaky state of the provincial governments and the pernicious effects of the shadow economy.

    • If anyone wants to know what is happening in China, go and have a look. I was totally amazed at the levels of poverty and the harshness with which people treat each other (no government bodies needed for this). Some of the most cold-hearted people I have ever encountered. In Xi’an I gave up my seat for a person who looked about 80, and every other person just sat and watched this person struggling. Chongqing is even worse.

      • That is one of the worst features of communism, that it pits people against each other.
        In capitalism, those who are able to cooperate the best, succeed. Under communism, success is determined by who is best able to brown nose those above him.

  10. “and simulating an ocean 1.5° too warm is going to inject an enormous amount of nonexistent moisture into the atmosphere, which will be precipitated over the continent in nonexistent snow.”

    They do it for nonexistent climate change. Climate change “science” is no longer a scientific endeavor. It long ago passed into the political realm, where consensus matters and contrary findings are ignored (starting with AR2 and Ben Santer’s dishonest stunt, which in any actual scientific field would have gotten him sanctioned and dismissed; after that, Mann’s hockey-stick dishonesty was easy).

    What I keep wondering is how, if the science is settled on CO2 and climate sensitivity, these climate charlatans can keep demanding research grants to study their scam and keep getting funded. Inquiring minds want to know. I’m in the Lindzen camp, where all of academic and government ‘clisci’ needs a 90% funding cut across the board, with modellers first.

  11. “there is now evidence that giving equal weight to each available model projection is suboptimal.”

    I think I hear an echo: Mine. Mine. . . . Mine x 29

    Ricola

  12. Heard a quote the other day when dealing with hydraulic models: “All models are wrong, but some are useful.”

    • That’s true of all models, not just hydraulic ones. Even in aerospace we wind-tunnel-test aspects that are frequently missed in modeling. They didn’t do that for the F-35, and that caused some of their problems with fuel-line cracks and fires. They assumed the F-16 model was close enough for engine flight stressing.

    • It depends on what purpose the “some” are used. If they are used as an attack on our society, economy and energy systems, I don’t like their “use.”

  13. “First, the integrated “redness” of the map appears to be a bit larger than the integrated “blueness,” which would be consistent with the oft-repeated (here) observation that the models are predicting more warming than is being observed.”
    It might be oft repeated, but as stated here it is sloppy. The real issue people want to raise is that the model trends are higher than observed, and this has nothing to do with the SST figure shown. People seem very incurious about what that figure is, and the article is of no help there. It isn’t actually an Eyring et al. result; they just quote it from a 2015 paper (Fig. 1) by Richter. It isn’t SST at any particular time; it is the deviation of the long-term average, for both Earth and the models.

    SSTs aren’t a particularly good test of GCMs. The reason is that GCMs are primarily air and ocean models, coupled. The coupling is through a boundary-layer model where things happen on a scale of millimetres to metres. Temperature has a steep gradient here, and it is possible that models could get that gradient wrong in places without spoiling the global approximation. That would happen if the turbulence model in that thin boundary layer were inaccurate, for example.

    • “…it is possible that models could get that gradient wrong in places without spoiling the global approximation…”

      Well that’s really analogous to how the GCM results typically are. They’re wrong in place-after-place but get the global approximation close enough through the balancing of errors that they look “reasonable.”

  14. “[W]e take 29 groups of models and average them. Anyone, we repeatedly point out, who knows weather forecasting realizes that such an activity is foolhardy. Some models are better than others in certain situations, and others may perform better under different conditions.”

    The problem is that, since you need at least several decades of data to even begin to discern “climate” from “weather,” the very process of empirically testing how good a climate model is at predicting changes in climate isn’t practical. Thus, climate scientists simply take an average of the models, not because it makes any scientific or logical sense to do so, but out of desperation; they have no data to distinguish one model’s accuracy from another’s, so the “averaging” process treats all equally and just puts up a veneer or facade that some kind of “scientific” expertise is being applied, for the terminally gullible to fall for. But in truth, the fact that they have to blindly average a bunch of disparate models tells you that climate scientists have no real idea of how or why the climate changes.

    • That is baked into the design of the IPCC. They don’t want to understand climate; that was never the goal. The point was to find a hook to give the UN control of the world’s energy supplies and unlimited power. They did this by making the IPCC consider only man-made changes in climate. Its mandate was to find man guilty and enable the execution of western civilization.

      • That is the key issue in climate science (sic): they (the UN) decided at an early stage to ignore the science and just stick with the numbers that gave the right political answer.
        It would be a catastrophe for the control movement, or UN as it has become known recently, if the reality of climate change being normal and not alarming were allowed to be absorbed by the population at large.
        What would the de-industrialists, i.e. the Greens, do if their go-to scare story were taken off them?

    • Tuning to the late 20th Century gives wildly varying model hindcasts and forecasts. That’s because the different modelers create and tweak their models to get the ECS they want. Producing CAGW seems to be the dominant driver for most of the modelers.
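The equal-weight multi-model averaging discussed in this thread, and the performance-based weighting the Eyring et al. paper argues is preferable, can be sketched in a few lines. All numbers below are invented purely for illustration; real weighting schemes derive weights from measured model skill and independence, not hand-picked values.

```python
import numpy as np

# Hypothetical projections of global-mean warming (degC) from five "models".
# These values are made up for illustration only.
projections = np.array([2.1, 3.4, 2.8, 4.0, 2.5])

# Equal weighting: the plain multi-model mean the comment objects to.
equal_weight_mean = projections.mean()

# Skill-based weighting (weights invented here): models judged closer to
# observations receive larger weights, and the weights sum to one.
weights = np.array([0.35, 0.05, 0.25, 0.05, 0.30])
weighted_mean = np.average(projections, weights=weights)

print(f"equal-weight mean: {equal_weight_mean:.2f} degC")
print(f"weighted mean:     {weighted_mean:.2f} degC")
```

With these invented numbers, the weighted mean comes out lower than the equal-weight mean because the two hottest models were assigned small weights; a different (equally hypothetical) weight choice could move it the other way.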

  15. Says Nick Stokes: “and it is possible that models could get that gradient wrong in places without spoiling the global approximation”

    Hmmmm, so as long as you like the results, fundamental errors do not matter: just a bit of bad luck, but still usable!

  16. From the paper:

    In addition, it has been demonstrated that CMIP models are not independent. Most inferences in the literature about model interdependence are derived from error correlation [13,79]. This cannot identify the specific model components that are interdependent. Identification of these common components is a difficult task due to the large number of models involved in CMIP and lack of detailed information regarding individual model versions.

    13. Sanderson, B. M., Knutti, R. and Caldwell, P. Addressing interdependency in a multimodel ensemble by interpolation of model properties. J. Clim. 28, 5150–5170 (2015).

    79. Sanderson, B. M., Knutti, R. and Caldwell, P. A representative democracy to reduce interdependency in a multimodel ensemble. J. Clim. 28, 5171–5194 (2015).

    [ bold by edh and edited to change the literature citations from superscripts to [ ], and change the ampersand in the references to ‘and’ ]

    Lack of complete and correct documentation at an appropriate level of detail, both external and internal to the source code, is a defining characteristic of 20th-century software that is considered especially unworthy of critical applications. When, over the past 15 years, this has been stated to be a significant deficiency in Climate Science software, it has always been mocked and rejected by Climate Scientists. Now it has appeared in a Peer-Reviewed Paper written by True Climate Scientists, and published in a Certified Climate Science Journal.

    Climate Science continues to present results from Black Box software. Climate Science is the single area of software applications that are used to guide public policy in which this state is allowed to exist.
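The “error correlation” diagnostic the quoted passage refers to can be sketched as follows. The fields here are synthetic random data, purely to show the mechanics: two models built to share an error component correlate strongly in their biases, while a third model with independent errors does not.

```python
import numpy as np

# Synthetic illustration of error correlation between models (all data fake).
rng = np.random.default_rng(0)
obs = rng.normal(size=(10, 20))           # stand-in "observed" field
shared_error = rng.normal(size=(10, 20))  # error component shared by A and B

model_a = obs + shared_error + 0.1 * rng.normal(size=(10, 20))
model_b = obs + shared_error + 0.1 * rng.normal(size=(10, 20))
model_c = obs + rng.normal(size=(10, 20))  # independent errors

def error_correlation(m1, m2, obs):
    """Pearson correlation between two models' error (bias) fields."""
    e1, e2 = (m1 - obs).ravel(), (m2 - obs).ravel()
    return np.corrcoef(e1, e2)[0, 1]

print(error_correlation(model_a, model_b, obs))  # near 1: shared errors
print(error_correlation(model_a, model_c, obs))  # near 0: independent errors
```

As the paper notes, a high error correlation flags two models as non-independent, but it cannot by itself say which shared component (code, parameterization, forcing data) is responsible.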

  17. Patrick:

    Is there a plot available showing just observed – INM-CM4 ?

    Thank you.

  18. I notice the huge warming errors around NY and CA where the errors are most useful for large public transportation funding needs/appeals.

  19. “There’s a current FALSE theory that El Niños are one mechanism that contributes to atmospheric warming, which holds that the temperature tends to jump in steps that occur after each big one.”

    It’s not “El Niños that contributes to atmospheric warming”

    but the sun rising in the east, making a long ride over the Pacific to sink in the west.*

    * Of course it’s the earth rolling west to east under the sun, but no one uses that phrase.
