Study: High End Model Climate Sensitivities Not Supported by Paleo Evidence

Guest essay by Eric Worrall

University of Michigan researchers have done the unthinkable, and checked climate model predictions against available paleo-climate data to see if the predictions are plausible.

Some of the latest climate models provide unrealistically high projections of future warming

Date: April 30, 2020 | Source: University of Michigan

A new study from University of Michigan climate researchers concludes that some of the latest-generation climate models may be overly sensitive to carbon dioxide increases and therefore project future warming that is unrealistically high.

In a letter scheduled for publication April 30 in the journal Nature Climate Change, the researchers say that projections from one of the leading models, known as CESM2, are not supported by geological evidence from a previous warming period roughly 50 million years ago.

The researchers used the CESM2 model to simulate temperatures during the Early Eocene, a time when rainforests thrived in the tropics of the New World, according to fossil evidence.

But the CESM2 model projected Early Eocene land temperatures exceeding 55 degrees Celsius (131 F) in the tropics, which is much higher than the temperature tolerance of plant photosynthesis — conflicting with the fossil evidence. On average across the globe, the model projected surface temperatures at least 6 C (11 F) warmer than estimates based on geological evidence.

“Some of the newest models used to make future predictions may be too sensitive to increases in atmospheric carbon dioxide and thus predict too much warming,” said U-M’s Chris Poulsen, a professor in the U-M Department of Earth and Environmental Sciences and one of the study’s three authors.

“Our study implies that CESM2’s climate sensitivity of 5.3 C is likely too high. This means that its prediction of future warming under a high-CO2 scenario would be too high as well,” said Zhu, first author of the Nature Climate Change letter.
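For readers unfamiliar with how a single ECS number maps onto projected warming, here is a minimal sketch using the standard logarithmic CO2-forcing approximation. The function name and ppm values are illustrative, not taken from the study:

```python
import math

def equilibrium_warming(c_ppm, c0_ppm, ecs_c):
    """Equilibrium warming (deg C) for a CO2 change from c0_ppm to c_ppm,
    assuming warming scales with each doubling: dT = ECS * log2(C/C0)."""
    return ecs_c * math.log2(c_ppm / c0_ppm)

# One doubling from a pre-industrial ~280 ppm:
print(equilibrium_warming(560, 280, 5.3))  # CESM2-like ECS -> 5.3 C
print(equilibrium_warming(560, 280, 3.0))  # mid-range ECS  -> 3.0 C
```

The study’s point is that the 5.3 C figure on the first line is likely too high, so everything computed downstream of it scales too high as well.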

Read more: https://www.sciencedaily.com/releases/2020/04/200430113003.htm

“People underestimate the power of models. Observational evidence is not very useful” – attributed to John Mitchell, UK Met Office

Most fields of science don’t accept a model unless it has been rigorously validated against available data, but climate science is different; the modelling process itself frequently seems to be accepted as evidence that the climate model is correct, a circular chain of reasoning which leads to positions which outside of climate science would be considered absurd.

Let us hope this novel protocol of testing climate models against available evidence catches on.

The paywalled study is available here.

88 thoughts on “Study: High End Model Climate Sensitivities Not Supported by Paleo Evidence”

  1. Obviously those University of Michigan climate researchers are in the pay of big oil and the evil Koch Brothers. We must immediately band together and demand they be fired and that foul useless university that doesn’t get real science must be immediately closed. Think of the children!

    • They were kidnapped by the ultra right wing Michigan Militia, and forced to write a summary that followed the real evidence. But even more hideous, since they were all vegans they were forced to eat a smoked pork shoulder, with a side of mashed potatoes drenched in home-style gravy. The video started to circulate on YouTube, but was quickly scrubbed.

      I couldn’t be makin’ this kinda stuff up, could I?

        • I saw that video, before it was scrubbed, and the reason it was scrubbed is not the reason you might think. What it showed was the Michigan-Militia-kidnapped vegans ravenously devouring the pork shoulder, mashed potatoes and home-style gravy, which caused them to question everything they ever believed in, converting on the spot to meat-eating rebels that the Google empire could not allow to get out any further.

          I don’t make this stuff up either. This should be of great comfort to all the media folks that do.

        • I was too slow…

          “This video has been removed for violating YouTube’s Terms of Service.”

        • Do you remember the title…? Maybe it’s on BitChute.

          There’s a lesson in here, folks….

  2. it’s not a bug it’s a feature…

    ..like when they take the IPCC’s worst implausible case and run with it

    or when UHI adds 5 degrees….they adjust down 1/2 degree….and claim adjustments lower temperature

  3. An ECS over 5 degrees C per doubling is plainly preposterous on its face, which of course would make it a popular claim in the zany, whacky, evidence-free world of consensus “climate science”, so notoriously unhinged from objective reality.

    The GIGO model was perpetrated by UCAR:

    http://www.cesm.ucar.edu/models/cesm2/

    UNIVERSITY CORPORATION FOR ATMOSPHERIC RESEARCH
    SERVING THE EARTH SYSTEM SCIENCE COMMUNITY

    UCAR manages the National Center for Atmospheric Research on behalf of the National Science Foundation.

    The NSF would be well advised to find a new manager.

  4. Whhaaat? . . . the “scientists” that created these massive, supercomputer-based climate models didn’t ever think of just validating them against previous paleoclimatology DATA??? It took an independent organization to do this work for them?

    On second thought, I suspect they did exactly that—for at least one or two test cases—and simply did not like the results so they ignored such in order to declare modeling success, as was necessary to secure their current and future funding.

    • There are no climate models – just computer games that predict what their programmers want predicted. Personal opinions and beliefs disguised as complex math and science. Predictions will always be for lots of global warming — there is no attempt to make accurate predictions, except for one Russian model. The models are props to make scary, wild-guess, always-wrong predictions seem like real science.

      • What’s surprising about that? Years ago, an Oxford mathematician and WUWT contributor named Mike Jonas replicated the climate models and used the result to correlate geological temperature records with carbon dioxide levels. He found that CO2 accounted for about 10% of the earth’s temperature record.

  5. Good posting, Eric. Often when I put my Geologist hand lens on and begin explaining what the geologic record tells us about natural variation, I get a blank look that means “models”! Not models like from computers, models like with chest measurements bigger than IQs. That blank look is my cue to stop trying and change the subject to something more appropriate, like tofu. Stay sane and safe (quarantine modified, walked with dogs!).

    • As I have followed climate change issues over the past several years, it seems as if those with a background in the science of geology are some of the greatest skeptics of CAGW. Perhaps it's because the study of geology requires a serious study of the past to understand the present, rather than abstract theory to predict the future. In any event, that is my non-peer-reviewed conclusion from my anecdotal observations.

      • Mr. Kelley and Mr. Long. Based upon my personal observations of graduates from the University of Iowa within the last 10-15 years, on behalf of said graduates, I am obligated to state “Ok Boomer”. 😉

      • Don’t count us out – the economists.

        We use the same – as in SAME – maths in Macro forecasting, and projections beyond the near term suck just as badly as theirs will (count on it). But Micro works pretty well – completely different models. Thing is – everybody in Econ admits it … having taken the beatings in stride, for the most part. (OK, not everybody)

        Now if we could just go back and change the data … hmmm ….

  6. There is no skill to predict forward or estimate backward. However, forecasts are pretty reliable in a limited frame of reference, say one week, maybe. Here’s to our system remaining semi-stable, computationally manageable, and tolerable.

  7. Word is (Geophysical Research Letters, 3 Jan 2020) that 27 of the CMIP6 models are running hotter than those for AR5, with 10 having an ECS above 4.5! This is NOT good news for AR6, because it increases the ECS discrepancy between models and the observational energy budget methods that AR5 could not paper over.
    One would have thought the reverse, since the CMIP6 30-year hindcast parameter tuning incorporates more of the pause than did CMIP5. I guest posted on the hindcast attribution problem previously.

    The GRL paper attributes this to a higher cloud feedback. Might be true in the models, but cannot be true in reality. It makes little sense based on AR5’s cloud discussion and the earlier observational fact that clear-sky/all-sky satellite data suggest cloud feedback is about zero. Dessler’s 2010 paper first showed this although he incorrectly claimed otherwise based on a laughable r^2 of 0.02.

    • Rud, It can be no surprise that those models have just got to run all the hotter now that an early century pause delayed their seemingly inexorable reach up toward a predicted ‘alarming’ level. Like distance runners too slow off the starting block, those modeled rates of change must find their ‘second wind’ to appear to be in the running for the desired prize at the 2100 finish line.

      This is another case of desperate doubling down rather than the unthinkable acknowledgment of a much less personally rewarding reality. And the dreaded humility of the repentance involved in favoring the truth at this point ironically poses an existential threat to their own identity that is on a par with that vital crisis they tried to foist upon the rest of us. Thus: hubris uber alles!

      • “Rud, It can be no surprise that those models have just got to run all the hotter now that an early century pause delayed their seemingly inexorable reach up toward a predicted ‘alarming’ level.”

        I think you put your finger on it, Doc.

        It looks like the IPCC is trying to *will* CO2 to make the atmosphere warmer.

        This Human-Caused Climate Change scandal just keeps getting bigger and more outlandish.

    • “Dessler’s 2010 paper first showed this although he incorrectly claimed otherwise based on a laughable r^2 of 0.02.”

      Wait. This can’t be the r² of statistical correlation fame. A value of 0.02 for that r² would be far beyond laughable, well into Loony Tunes territory. I can probably get a higher r² for a sneeze.

  8. How dare they contradict the science !

    And by the way, we live at the present time, not 50 million years ago
    when Fridays For Future didn’t even exist!

    pff

  9. I look forward to the BBC and Grauniad reports that they are doubtless working diligently on as we speak.

    • The media reports will consist of red herrings, false analogies, and ad hominems. They have batches of those.

  10. More data substantiating the obvious, but unfortunately, no amount seems to be sufficient to falsify the IPCC’s absurdly overestimated ECS. The reason is simple. The actual ECS is too low to support the existence of the IPCC. They chose a value upon their inception that was large enough to justify their creation and then canonized it as ‘settled science’ to ensure their continuation. They will never accept the truth, as it means their dissolution.

    • They choose a wide range for ECS so that when the science finally is settled and everyone agrees on a low ECS, they can still claim that they weren’t wrong since a low ECS is still within their range.
      In the meantime they claim that they have to work using the high end, because they have to think of the children.

      • And yet the actual ECS is below the lower limit of their presumed range! Apparently, +/- 50% uncertainty isn’t enough for their ‘settled science’ to even be technically correct.

        The data couldn’t be any clearer: the average effect on the surface from each of the 240 W/m^2 from the Sun is about 1.62 W/m^2 of NET surface emissions, with less than 5% variability from year to year. The next average W/m^2 of any kind of forcing will do the same, and this increase in surface emissions corresponds to a temperature increase of about 0.3C +/- 0.02C, whose upper limit is still 18% less than the IPCC’s lower limit of about 0.4C.
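The ~0.3 C figure in the comment above can be checked with simple Stefan–Boltzmann arithmetic. This sketch only reproduces the commenter’s own zero-feedback-style reasoning; the 1.62 ratio is the commenter’s number, not an established value:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_SURF = 288.0           # nominal global-mean surface temperature, K

# Sensitivity of blackbody emission to temperature: dP/dT = 4*sigma*T^3
dP_dT = 4 * SIGMA * T_SURF**3        # about 5.4 W/m^2 per K at 288 K

# Commenter's claim: ~1.62 W/m^2 of extra net surface emission per
# 1 W/m^2 of forcing; invert the emission slope for a temperature change.
dT = 1.62 / dP_dT
print(round(dT, 2))                  # -> 0.3
```

So the arithmetic is internally consistent with the quoted 0.3 C; whether the 1.62 W/m^2 ratio itself is the right sensitivity measure is the commenter’s premise, not something this calculation can settle.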

  11. But the CESM2 model projected Early Eocene land temperatures exceeding 55 degrees Celsius (131 F) in the tropics, which is much higher than the temperature tolerance of plant photosynthesis — conflicting with the fossil evidence.

    The world was a different place 50 million years ago. In particular, the Isthmus of Panama had not yet closed. If you were going to apply a model, you would have to account for the radically different ocean currents. On the other hand, are the GCMs even capable of modelling ocean currents based on raw physics and geometry?

    One way or another, the conflict between the models and the proxy evidence highlights shortcomings in the models.

    • Are you sure about that, Commie? (ROTFL great handle)

      I thought Climate Models were net energy balance, CO2 being “global” and all that … while plenty of models work out guesses (sorry) predictions on AMOC and such, the overall “equilibrium temperature” models are “energy in, energy out.” Roy Spencer had a good primer on this posted here, as I recall….

  12. Any model that bases future climate on CO2 will always be wrong…CO2 follows temperature, not the other way around. The fundamental basis of all of these models begins with CO2 causes warming. Of COURSE they are all wrong!

    • Take a look at the Youtube videos of Potholer54, who goes to amazing lengths to try to explain this Inconvenient Truth away.

      • Potholer is too concerned with perfecting his posh accent to worry about trivial things like reality.

    • “The fundamental basis of all of these models begins with CO2 causes warming. Of COURSE they are all wrong!”

      Yes, they got the basic foundation of the current Human-Caused Climate Change hoax wrong, and everything that follows from that is also wrong. All the thousands of studies based on this flawed foundation are wrong. All the expensive CO2 mitigation efforts are wrong because they are unnecessary.

      Yet, the IPCC keeps pushing these lies onto the public as if they were facts. We will keep pushing back. 🙂

  13. “Most fields of science don’t accept a model unless it has been rigorously validated against available data, but climate science is different; the modelling process itself frequently seems to be accepted as evidence that the climate model is correct”

    Yes sir. A logic in reverse that leads to the weirdness that agw theory becomes the null hypothesis.

    https://tambonthongchai.com/2019/02/03/hidden-hand/

    • “Yes sir. A logic in reverse that leads to the weirdness that agw theory becomes the null hypothesis.”

      Good point. The IPCC has turned science upside down.

      It should be: Mother Nature (natural variability) is what causes the changes in the Earth’s weather, until proven otherwise.

      The IPCC (wrong) way: CO2 is what causes the changes in Earth’s weather, until proven otherwise.

      The IPCC has not shown that CO2 causes *any* changes in Earth’s weather, much less that CO2 causes all the changes. This is science by assertion. It’s made the IPCC and its Spawn a lot of money so far. I imagine they will continue this hoax for as long as possible.

      Human-Caused Climate Change: The Biggest Hoax in Human History.

  14. More heresy from French paleoclimatologists, this time studying the Cretaceous Hothouse. Their low estimate of CO2 concentration is questionable, however, especially for the hottest middle of the period. IMO, plant food level was at least 1000 ppm throughout the period, with a high of perhaps 2000 and an average of 1700 ppm.

    CO2 and temperature decoupling at the million-year scale during the Cretaceous Greenhouse

    https://www.nature.com/articles/s41598-017-08234-0

    CO2 is considered the main greenhouse gas involved in the current global warming and the primary driver of temperature throughout Earth’s history. However, the soundness of this relationship across time scales and during different climate states of the Earth remains uncertain. Here we explore how CO2 and temperature are related in the framework of a Greenhouse climate state of the Earth. We reconstruct the long-term evolution of atmospheric CO2 concentration (pCO2) throughout the Cretaceous from the carbon isotope compositions of the fossil conifer Frenelopsis. We show that pCO2 was in the range of ca. 150–650 ppm during the Barremian–Santonian interval, far less than what is usually considered for the mid Cretaceous. Comparison with available temperature records suggest that although CO2 may have been a main driver of temperature and primary production at kyr or smaller scales, it was a long-term consequence of the climate-biological system, being decoupled or even showing inverse trends with temperature, at Myr scales. Our analysis indicates that the relationship between CO2 and temperature is time scale-dependent at least during Greenhouse climate states of the Earth and that primary productivity is a key factor to consider in both past and future analyses of the climate system.

    • It’s quite clear the Earth’s temperature is controlled by the vast deep oceans and the water vapor those oceans transfer to the atmosphere, which then convects to move energy vertically. Thus water vapor, and the depressed adiabatic lapse rate it creates, is the most abundant and consequential GHG.

      • Yep. I live on the edge of the tropics. Whenever it gets really hot a thunderstorm forms, and afterwards it’s a few degrees cooler.

        • However, as for GCMs, clouds do not compute. They are “parameterized”. IOW, whatever GIGO modelers want or need them to be.

          “Consensus climate science” is not just unscientific, but deeply antiscientific.

        • Eric
          Not just the tropics. When I lived in Vermont, I came to expect a thunderstorm every Summer afternoon. While the locals joked that Summer comes on July 4th, and leaves on July 5th, it really is a few days longer than that.

  15. “Back testing” and “hind casting.” That’s like… you know… science?

    Given a choice twixt climate prediction and a conjure bag I’d opt for a rolling of the bones.

  16. It takes unrealistically high CO2 levels to derive apparent Cretaceous temperatures, using GCMs. No problem! Just raise the ECS estimate.

    Besides hot seas (and perhaps because of them), the mid-Cretaceous was also probably relatively cloudless.

    This is from 2002, but the problem persists.

    Possible atmospheric CO2 extremes of the Middle Cretaceous (late Albian–Turonian)

    https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2002PA000778

    Atmospheric carbon dioxide (CO2) estimates for the Middle Cretaceous (MK) have a range of >4000 ppm, which presents considerable uncertainty in understanding the possible causes of warmth for this interval. This paper examines the problem of MK greenhouse forcing from an inverse perspective: we estimate upper ocean water temperatures from oxygen isotope measurements of well‐preserved late Albian–Turonian planktonic foraminifera and compare these against temperatures predicted by general circulation model (GCM) experiments with CO2 concentrations of 500–7500 ppm. At least 4500 ppm CO2 is required to match maximum temperatures inferred from well‐preserved planktonic foraminifera. Approximately 900 ppm CO2 produces a good match between the model and the minimum temperature estimates for the MK. An ocean model forced by these two extremes in surface conditions brackets nearly all available bottom water temperature estimates for this interval. The climate model results support nearly the entire range of MK CO2 estimates from proxy data. The ocean model suggests possible MK oceanographic changes from deep water formation in the high latitude region of one hemisphere to the other hemisphere in response to changes in atmospheric temperatures and hydrologic cycle strength. We suggest that, rather than contradicting one another, the various proxy CO2 techniques (especially those with high temporal resolution) may capture true variability in CO2 concentrations and that MK CO2 could have varied by several thousand ppm through this interval.

  17. We’ve been hearing for some time that the AR6 group of models predict more warming than the AR5s. Obviously, the doomsday message hasn’t been getting across properly because the industrialized countries aren’t making enough effort to be fossil fuel free, so the predictions just have to become more dire.

    Never mind that the more warming the models predict, the sooner they will become so disconnected from reality that temperature adjustments won’t be able to keep up with them, and their errors will become transparently obvious. The green gangsters are getting panicky and they’re going for broke.

    Perhaps AR6 will be their last roll of the dice. We can hope.

  18. “People underestimate the power of models. Observational evidence is not very useful” –

    Oh models are quite useful when your purpose is political and not science. The early COVID-19 models were quite useful in crashing the world economy and starting a Great Depression. How else are the UN and globalists going to get rid of the existential threat Donald Trump represents to globalism and UN hegemony but to kill the US economy and sour the voters on him by November?

    • Well, surface station “observations” indeed aren’t very useful, except to show that, cooked to a crisp though they be, they still don’t jibe with GIGO GCMs.

      And how can you validate models without reference to past observations?

      Consensus “climate science” has passed through the postmodern looking glass.

    • “Oh models are quite useful when your purpose is political and not science. The early COVID-19 models were quite useful in crashing the world economy and starting a Great Depression.”

      Which computer model are you talking about? The University of Washington computer model projected an initial figure of 100,000 dead from Wuhan virus if mitigation actions were taken. The U.S. is currently nearing 70,000 deaths and the dying isn’t over yet, so the University of Washington virus computer model looks like it is in the ballpark and getting more accurate by the day, unfortunately for the innocents that have to suffer.

      I guess one of these days all those who have been trashing the virus computer models are going to have to admit they were wrong. Or will they? How can they not when the figures are the figures?

      And Btw, I mentioned earlier that a couple of people involved with the University of Washington virus computer models, a Dr. Murray and another man whose name I didn't catch, both said that their new estimate was a total of 72,000 dead by Aug. 4, 2020.

      I said at the time that this figure seemed a little low and I didn't understand why they were making this prediction. I think it is clear the U.S. will exceed 72,000 dead within the next two weeks, not the next three months, so it looks like in this case the people at the University of Washington are estimating too low. Don't ask me why. The trend and numbers look pretty obvious.

      Here’s a link to the story:

      https://twitter.com/CNN/status/1256041007138791425

      “An influential coronavirus model is projecting over 72,000 coronavirus deaths in the US by early August.

      Dr. Christopher Murray, who leads the team that did the modeling, explains why the latest projection has moved higher.”

      end excerpt

      We had 66,000 dead yesterday. Probably 68,000 today. Getting close to 72,000. I don't know where this figure came from, but it doesn't negate the fact that this University of Washington computer model predicted a low-range value of 100,000 dead with mitigation, and we are getting close to that number.

      The closer we get, the more nervous the virus computer model bashers will get.

      I heard some poor, misguided person on tv this morning talking about how the Wuhan virus isn’t any more dangerous than the normal flu. She believed that because that is what she has been told by equally misguided individuals. She was saying this to promote getting back to work.

      Lots of people are misguided by the attack on the virus computer models, too. Much ado about nothing. But that “much ado” has misled the public and might cause some of them to make unwise decisions like dismissing the seriousness of the Wuhan virus because they have been led to believe their leaders cannot be trusted because the virus computer models are claimed to be bogus.

      I guess I’ll stop there.

  19. 50% of the CMIP5 ensemble models have been out of range for 21 years vs observations. They should be dropped, but that would destroy the dangerous-warming meme.

  20. I will be interested to learn how INM CM5 does. INM CM4 was the only one in CMIP5 that projected temperatures about ‘right’. INM has been publishing their upgrades and upgrade tests, but I haven’t found any full run stuff yet.

  21. How do any of these models reconcile with observed temperatures since 1970? My understanding is that the observed warming is way below the models. As temperatures rise, doesn't the sensitivity of temperature to CO2 decrease?
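On the last question: yes, under the standard logarithmic forcing relation the warming contributed per additional ppm falls as the concentration rises. A minimal sketch (function name and the ECS value used are illustrative):

```python
import math

def warming_per_ppm(c_ppm, ecs_c=3.0):
    """Incremental warming per extra ppm of CO2.
    From dT = ECS * log2(C/C0), the derivative is ECS / (C * ln 2)."""
    return ecs_c / (c_ppm * math.log(2))

print(warming_per_ppm(280))  # pre-industrial: ~0.015 C per ppm
print(warming_per_ppm(410))  # modern:         ~0.011 C per ppm
```

Each additional ppm matters less than the last, which is exactly why the headline sensitivity number (per doubling, not per ppm) is where all the argument is.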

  22. I’m sure that this new analysis will be reflected in the next round of IPCC reports/sarc.
    Yes, no virus can stop the IPCC’s work, even if those international junkets have had to be replaced with virtual meetings.
    The next round of reports being developed for AR6: https://www.ipcc.ch/report/sixth-assessment-report-cycle/
    The Physical Science Basis, April 2021
    Mitigation of Climate Change, July 2021
    Impacts, Adaptation and Vulnerability, October 2021
    Synthesis Report: Climate Change 2022, June 2022
    For a climate emergency, they seem to dawdle along. Isn’t the science settled? I’m sure Greta could write the report for them by next week.

    • Except for the very lowest-end, ie closest to observed reality, Russian model. All the dozens of others should be tossed. But of course they won’t be, because CLIMATE CHANGE EMERGENCY!!!

  23. I have read of a couple of ‘hindcasting’ analyses that have demonstrated the, at best, ‘shortcomings’ in the usual climate models. For example:
    https://www.washingtonexaminer.com/opinion/op-eds/the-great-failure-of-the-climate-models
    I recall another from a few years ago – I think it was reported on Jo Nova’s blog – but I can’t find it and don’t appear to have kept a copy of it. But, when searching for it, I found this in my archive:

    In a 2009 interview with FlightGlobal, the late former Boeing 747 chief engineer, Joe Sutter, cautioned about reliance on computer-assisted design tools in aircraft development. “There should not be an over-emphasis on what computers tell you, because they only tell you what you tell them to tell you,” he said.

    https://www.flightglobal.com/opinion/opinion-what-aircraft-designers-should-learn-from-joe-sutter/121608.article
    Very droll!

  24. Most fields of science don’t accept a model unless it has been rigorously validated against available data, but climate science is different;

    I remember seeing an episode of Modern Marvels on the History Channel years ago.
    In the episode they did a thing on a car company using computer models to design bumpers that would stand up to a crash.
    When they got a model with the desired results, they didn't put it into production.
    They built prototype bumpers, and crashed them, and then compared the results of the computer model vs reality.
    The computer models saved them money in design and research but their output wasn’t mistaken for reality. How the prototype performed was the reality that production would be based on.

  25. I am doing a bit with SPICE, the electronic circuit modelling program. I am always impressed at how well it predicts the DC voltage levels in transistor circuits. Some computer models mostly work.

    • peterg
      As a general rule of thumb, properly constructed deterministic models provide results that are superior to stochastic models. The problem is, not all systems lend themselves to deterministic formulations.

    • DC is pretty easy: there are no frequency-related terms and you don't even need calculus to solve. Now do AC circuits, especially at RF frequencies. When you get the circuit you want and actually build it, the fun begins. Parasitics, varying permeability, etc. Models can only get you to a starting point, not the final solution.

      • Yes Jim Gorman,
        It is the tool we have and, as you say, a starting place.
        Then later, RF circuits can become unstable again when silly environmental matters like (sometimes only small) changes in temperature, vibration, dust accumulation, and damp interfere with all those carefully constructed mathematical parameters of the components, cables, connectors, and the PCB materials.

  26. Some of the latest climate models provide unrealistically high projections of future warming

    That statement implies that some climate models provide realistically high projections of future warming.

    How does anyone decide that some projection is realistically high? Is there a committee that decides? A consensus of climate casuists?

    How does one judge any given projection is realistic at all? There is no adequate physical theory of the climate available to make any estimate.

    Critical thinkers are at a premium in consensus climatology. One may analogize that the bloody-minded CO2-monotheists have driven off all the free-thinking heretics.

    These people live in a science-abusive cloud-cuckoo land.

  27. There are 4 major points climate modellers need to consider very seriously:
    1. The equatorial speed of Earth's rotation makes all gases circulate through the atmosphere vertically and act as surface cooling agents;
    2. CO2 in polar ice cores is lower than in the tropics, as snow itself has minimal CO2 if any, which means that the air trapped in firn is from the warmer seasons only and gives a false impression if taken as an annual value;
    3. Earth's orbit and orientation with respect to the sun vary all the time, affecting the sun's illumination of the poles.
    4. Anthropogenic CO2 additions to the trillions of tonnes circulating through the atmosphere and sequestered annually by photosynthesis and by precipitation (as H2CO3) cannot be cumulative and, if anything, would have a net global cooling effect.

    • There is one more thing about CO2…
      It is ALL emitted at the surface, and it is ALL absorbed at the surface. It has a higher concentration at the surface for those very reasons. To call it ‘well mixed’ is plain wrong. Most of it is at the surface because the atmosphere is thickest at the surface. So, well-mixed is a misnomer. It doesn't ‘rain’ CO2, and CO2 isn't lighter than the other major constituents of the atmosphere – in fact, it is heavier, which is why it settles and puts fires out. So, to claim that there is some ‘blanket’ of CO2 ‘out there’ that traps heat is plain disingenuous, or is claimed by those that have no clue about gases.

  28. “University of Michigan researchers have done the unthinkable, and checked climate model predictions against available paleo-climate data to see if the predictions are plausible.”

    It is done all the time.

    One problem is this.

    1. Your paleo “data” is not readily comparable, so you have to turn it into temperatures (you know, turning tree rings into temperature). This is a MODEL.
    2. Your GCM puts out data. This is not observational data. It too is a MODEL.

    So you compare the two models of temperatures: one statistically derived from proxies, one physically derived from first principles.

    Comparing two models is hard. Long ago I was sitting in an AGU session discussing how to compare models with paleo data. The question was raised: if the paleo model of rainfall doesn’t match the GCM model of rainfall, which is correct?
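    One illustrative way to frame the comparison (a hypothetical sketch with made-up numbers, not any project’s actual method) is to treat each proxy estimate as a value with an uncertainty band and ask how often the GCM output falls inside that band:

```python
# Hypothetical sketch: compare GCM output with proxy reconstructions by
# counting how many sites have model values within the proxy uncertainty.
# All numbers below are illustrative only.

def fraction_within_uncertainty(model_vals, proxy_vals, proxy_sigmas, n_sigma=2.0):
    """Fraction of sites where the model lies within n_sigma of the proxy estimate."""
    hits = sum(
        abs(m - p) <= n_sigma * s
        for m, p, s in zip(model_vals, proxy_vals, proxy_sigmas)
    )
    return hits / len(proxy_vals)

model = [302.0, 299.5, 305.0]  # GCM surface temperatures at proxy sites, K
proxy = [300.0, 299.0, 300.0]  # proxy-derived temperatures, K
sigma = [1.5, 1.0, 1.5]        # one-sigma proxy uncertainties, K

print(fraction_within_uncertainty(model, proxy, sigma))
```

    Note that this only scores agreement; it cannot say whether a mismatch is the model’s fault or the proxy calibration’s, which is exactly the question raised above.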

    Anyway, comparing models to paleo data is nothing new. There is a whole project devoted to it: PMIP.

    https://pmip.lsce.ipsl.fr/

    I do wish authors of posts would check the actual activities that people engage in.

    https://pmip.lsce.ipsl.fr/about_us/history

    The Paleoclimate Modelling Intercomparison Project (PMIP) emerged from two parallel endeavours. During the 1980s, the Cooperative Holocene Mapping Project showed the utility of combining model simulations and syntheses of paleoenvironmental data to analyse the mechanisms of climate change. At the same time, the climate-modelling community was becoming increasingly aware that responses to changes in forcing were model dependent. The need to investigate this phenomenon led to the establishment of the Atmospheric Modelling Intercomparison Project (AMIP) – the first of a plethora of model intercomparison projects of which PMIP (and CMIP1) are part.

    The specific aim of PMIP was, and continues to be, to provide a mechanism for coordinating paleoclimate modelling and model-evaluation activities to understand the mechanisms of climate change and the role of climate feedbacks. To facilitate model evaluation, PMIP has actively fostered paleodata synthesis and the development of benchmark datasets for model evaluation. During its initial phase (PMIP1), the project focused on atmosphere-only general circulation models; comparisons of coupled ocean-atmosphere and ocean-atmosphere-vegetation models were the focus of PMIP2.

    In PMIP3, project members are running the CMIP5 paleoclimate simulations and will lead the evaluation of these simulations. However, PMIP3 will also run experiments for non-CMIP5 time periods and will be coordinating the analysis and exploitation of transient simulations across intervals of rapid climate change in the past. PMIP also provides an umbrella for model intercomparison projects focusing on specific times in the past, such as the Pliocene Modelling Intercomparison Project (PlioMIP), or on particular aspects of the paleoclimate system, such as the Paleo Carbon Modelling Intercomparison Project (PCMIP).

    PMIP membership is open to all paleoclimatologists, and we actively encourage the use of archived simulations and data products for model diagnosis or to investigate the causes and impacts of past climate changes.

    Quoted from Braconnot et al, “Evaluation of climate models using palaeoclimatic data”, Nature Climate Change 2, 417-424 (2012), doi:10.1038/nclimate1456

  29. High sensitivity has always been the Achilles heel of climate models; the only surprise is that climate sceptics haven’t challenged alarmists’ projected warming claims anywhere near strongly enough.

    Geological records show higher CO2 in the past, yet there was no ‘runaway warming’. Have the laws of physics changed or something? Erm… nope. That nails the ‘tipping points’ / positive feedbacks / amplification / high forcing nonsense. Neither proxy data nor today’s observations support the theory (if it were true we wouldn’t be here now), so ‘runaway warming’ is falsified right there.

    Way past time AGW nonsense was put to bed forever.

  30. “Most fields of science don’t accept a model unless it has been rigorously validated against available data, but climate science is different; ”

    Models are validated against a SPECIFICATION.

    For example, my specification could be “model the temperature of X to within 2 K of the measured value.” A model that was off by more than 2 K would fail validation; one that was off by 1.9 K would pass.

    And sometimes the spec is rather easy to hit: “The model shall be more skillful than a naive forecast.”
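    A minimal sketch of what such spec checks might look like, using the 2 K tolerance mentioned above (all numbers hypothetical):

```python
# Minimal sketch of validating a model against a specification.
# The 2 K tolerance and the "beat a naive forecast" spec come from the
# comment above; the data values are invented for illustration.

def max_abs_error(modeled, observed):
    """Largest absolute difference between model output and measurements."""
    return max(abs(m - o) for m, o in zip(modeled, observed))

def passes_spec(modeled, observed, tolerance_k=2.0):
    """Spec: every modeled temperature within tolerance_k of the measured value."""
    return max_abs_error(modeled, observed) <= tolerance_k

def beats_naive(modeled, observed, naive):
    """Weaker spec: the model must be more skillful than a naive forecast."""
    return max_abs_error(modeled, observed) < max_abs_error(naive, observed)

observed = [288.0, 289.5, 290.1]  # measured temperatures, K
modeled  = [288.4, 291.4, 289.0]  # model output: worst error is 1.9 K
naive    = [288.0, 288.0, 288.0]  # persistence forecast

print(passes_spec(modeled, observed))  # 1.9 K max error: within the 2 K spec
```

    The same data that passes a loose spec would fail a tighter one, which is the point: “validated” only means something relative to the specification chosen.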

    • Models are validated against a SPECIFICATION.

      That’s interesting. Is this regardless of industry?

  31. This falsification of CESM2 will probably find modelers tweaking their model to make CESM3, a model which will equal CESM2 in BS.
