Study: High End Model Climate Sensitivities Not Supported by Paleo Evidence

Guest essay by Eric Worrall

University of Michigan researchers have done the unthinkable, and checked climate model predictions against available paleo-climate data to see if the predictions are plausible.

Some of the latest climate models provide unrealistically high projections of future warming

Date: April 30, 2020
Source: University of Michigan
Summary: A new study from climate researchers concludes that some of the latest-generation climate models may be overly sensitive to carbon dioxide increases and therefore project future warming that is unrealistically high.

A new study from University of Michigan climate researchers concludes that some of the latest-generation climate models may be overly sensitive to carbon dioxide increases and therefore project future warming that is unrealistically high.

In a letter scheduled for publication April 30 in the journal Nature Climate Change, the researchers say that projections from one of the leading models, known as CESM2, are not supported by geological evidence from a previous warming period roughly 50 million years ago.

The researchers used the CESM2 model to simulate temperatures during the Early Eocene, a time when rainforests thrived in the tropics of the New World, according to fossil evidence.

But the CESM2 model projected Early Eocene land temperatures exceeding 55 degrees Celsius (131 F) in the tropics, which is much higher than the temperature tolerance of plant photosynthesis — conflicting with the fossil evidence. On average across the globe, the model projected surface temperatures at least 6 C (11 F) warmer than estimates based on geological evidence.

“Some of the newest models used to make future predictions may be too sensitive to increases in atmospheric carbon dioxide and thus predict too much warming,” said U-M’s Chris Poulsen, a professor in the U-M Department of Earth and Environmental Sciences and one of the study’s three authors.

“Our study implies that CESM2’s climate sensitivity of 5.3 C is likely too high. This means that its prediction of future warming under a high-CO2 scenario would be too high as well,” said Zhu, first author of the Nature Climate Change letter.

Read more: https://www.sciencedaily.com/releases/2020/04/200430113003.htm

“People underestimate the power of models. Observational evidence is not very useful” – attributed to John Mitchell, UK Met Office

Most fields of science don’t accept a model unless it has been rigorously validated against available data, but climate science is different; the modelling process itself frequently seems to be accepted as evidence that the climate model is correct, a circular chain of reasoning that leads to positions which, outside climate science, would be considered absurd.

Let us hope this novel protocol of testing climate models against available evidence catches on.

The paywalled study is available here.

88 Comments
Natalie Gordon
May 2, 2020 10:05 am

Obviously those University of Michigan climate researchers are in the pay of big oil and the evil Koch Brothers. We must immediately band together and demand they be fired and that foul useless university that doesn’t get real science must be immediately closed. Think of the children!

mark from the midwest
Reply to  Natalie Gordon
May 2, 2020 10:42 am

They were kidnapped by the ultra right wing Michigan Militia, and forced to write a summary that followed the real evidence. But even more hideous, since they were all vegans they were forced to eat a smoked pork shoulder, with a side of mashed potatoes drenched in home-style gravy. The video started to circulate on YouTube, but was quickly scrubbed.

I couldn’t be makin’ this kinda stuff up, could I?

Scissor
Reply to  mark from the midwest
May 2, 2020 12:44 pm

Here’s the video.

https://youtu.be/4gTMQ9TthIo

Reply to  Scissor
May 2, 2020 1:37 pm

I saw that video, before it was scrubbed, and the reason it was scrubbed is not the reason you might think. What it showed was the Michigan-Militia-kidnapped vegans ravenously devouring the pork shoulder, mashed potatoes and home-style gravy, which caused them to question everything they ever believed in, converting on the spot to meat-eating rebels that the Google empire could not allow to get out any further.

I don’t make this stuff up either. This should be of great comfort to all the media folks that do.

Jeff Meyer
Reply to  Scissor
May 2, 2020 4:54 pm

It has been removed??? At least when I checked at 16:54 PST.

Reply to  Scissor
May 2, 2020 5:12 pm

I was too slow…

“This video has been removed for violating YouTube’s Terms of Service.”

JSMill
Reply to  Scissor
May 3, 2020 11:39 am

Do you remember the title…? Maybe it’s on BitChute.

There’s a lesson in here, folks….

Scissor
Reply to  Natalie Gordon
May 2, 2020 12:40 pm

That would be the evil Koch empire, since David Koch passed away last summer.

MarkW
Reply to  Scissor
May 2, 2020 2:29 pm

“If you strike me down, I shall become more powerful than you can possibly imagine.”

Clyde Spencer
Reply to  MarkW
May 2, 2020 8:44 pm

MarkW
That’s what all martyrs say!

Latitude
May 2, 2020 10:07 am

it’s not a bug it’s a feature…

..like when they take the IPCC’s worst implausible case and run with it

or when UHI adds 5 degrees….they adjust down 1/2 degree….and claim adjustments lower temperature

Rob_Dawg
Reply to  Latitude
May 2, 2020 12:34 pm

RCP8.5 is busted. Any current projection that uses it is invalid.

John Tillman
May 2, 2020 10:12 am

An ECS over 5 degrees C per doubling is plainly preposterous on its face, which of course would make it a popular claim in the zany, whacky, evidence-free world of consensus “climate science”, so notoriously unhinged from objective reality.

The GIGO model was perpetrated by UCAR:

http://www.cesm.ucar.edu/models/cesm2/

UNIVERSITY CORPORATION FOR ATMOSPHERIC RESEARCH
SERVING THE EARTH SYSTEM SCIENCE COMMUNITY

UCAR manages the National Center for Atmospheric Research on behalf of the National Science Foundation.

The NSF would be well advised to find a new manager.

May 2, 2020 10:35 am

Whhaaat? . . . the “scientists” that created these massive, supercomputer-based climate models didn’t ever think of just validating them against previous paleoclimatology DATA??? It took an independent organization to do this work for them?

On second thought, I suspect they did exactly that—for at least one or two test cases—and simply did not like the results so they ignored such in order to declare modeling success, as was necessary to secure their current and future funding.

Krishna Gans
Reply to  Gordon Dressler
May 2, 2020 10:41 am

They don’t know anything about hindcasts 😀
They believe only in forecasts 😀

Reply to  Gordon Dressler
May 2, 2020 3:28 pm

There are no climate models – just computer games that predict what their programmers want predicted. Personal opinions and beliefs disguised as complex math and science. Predictions will always be for lots of global warming – there is no attempt to make accurate predictions, except for one Russian model. The models are props to make scary, always-wrong wild guesses seem like real science.

Trebla
Reply to  Richard Greene
May 5, 2020 3:40 am

What’s surprising about that? Years ago, an Oxford mathematician and WUWT contributor named Mike Jonas replicated the climate models and used the result to correlate geological temperature records with carbon dioxide levels. He found that CO2 accounted for about 10% of the earth’s temperature record.

Ron Long
May 2, 2020 10:54 am

Good posting, Eric. Often when I put my Geologist hand lens on and begin explaining what the geologic record tells us about natural variation, I get a blank look that means “models”! Not models like from computers, models like with chest measurements bigger than IQs. That blank look is my cue to stop trying and change the subject to something more appropriate, like tofu. Stay sane and safe (quarantine modified, walked with dogs!).

Dennis Kelley
Reply to  Ron Long
May 2, 2020 11:27 am

As I have followed climate change issues over the past several years, it seems as if those with a background in the science of geology are some of the greatest skeptics of CAGW. Perhaps it’s because the study of geology requires a serious study of the past to understand the present, rather than abstract theory to predict the future. In any event, those are my non-peer-reviewed conclusions from my anecdotal observations.

Doug
Reply to  Dennis Kelley
May 2, 2020 5:37 pm

Mr. Kelley and Mr. Long. Based upon my personal observations of graduates from the University of Iowa within the last 10-15 years on behalf of said graduates I am obligated to state “Ok Boomer”. 😉

JSMill
Reply to  Dennis Kelley
May 3, 2020 11:49 am

Don’t count us out – the economists.

We use the same – as in SAME – maths in Macro forecasting, and projections beyond the near-term suck just as bad as theirs will (count on it). But Micro works pretty well – completely different models. Thing is – everybody in Econ admits it … having taken the beatings in stride, for the most part. (OK, not everybody)

Now if we could just go back and change the data … hmmm ….

n.n
May 2, 2020 10:54 am

There is no skill to predict forward or estimate backward. However, forecasts are pretty reliable in a limited frame of reference, say one week, maybe. Here’s to our system remaining semi-stable, computationally manageable, and tolerable.

Rud Istvan
May 2, 2020 10:54 am

Word is (Geophysical Research Letters 3 Jan 2020) that 27 of the CMIP6 models are running hotter than for AR5, with 10 having ECS above 4.5! This is NOT good news for AR6, because it increases the ECS discrepancy from models to observational energy budget methods that AR5 could not paper over.
One would have thought the reverse, since the CMIP6 30-year hindcast parameter tuning incorporates more of the pause than did CMIP5. I guest posted on the hindcast attribution problem previously.

The GRL paper attributes this to a higher cloud feedback. That might be true in the models, but it cannot be true in reality. It makes little sense based on AR5’s cloud discussion and the earlier observational fact that clear-sky/all-sky satellite data suggests cloud feedback is about zero. Dessler’s 2010 paper first showed this, although he incorrectly claimed otherwise based on a laughable r^2 of 0.02.

Doc Chuck
Reply to  Rud Istvan
May 2, 2020 3:25 pm

Rud, It can be no surprise that those models have just got to run all the hotter now that an early century pause delayed their seemingly inexorable reach up toward a predicted ‘alarming’ level. Like distance runners too slow off the starting block, those modeled rates of change must find their ‘second wind’ to appear to be in the running for the desired prize at the 2100 finish line.

This is another case of desperate doubling down rather than the unthinkable acknowledgment of a much less personally rewarding reality. And the dreaded humility of the repentance involved in favoring the truth at this point ironically poses an existential threat to their own identity that is on a par with that vital crisis they tried to foist upon the rest of us. Thus: hubris uber alles!

Tom Abbott
Reply to  Doc Chuck
May 3, 2020 5:20 am

“Rud, It can be no surprise that those models have just got to run all the hotter now that an early century pause delayed their seemingly inexorable reach up toward a predicted ‘alarming’ level.”

I think you put your finger on it, Doc.

It looks like the IPCC is trying to *will* CO2 to make the atmosphere warmer.

This Human-Caused Climate Change scandal just keeps getting bigger and more outlandish.

jorgekafkazar
Reply to  Rud Istvan
May 2, 2020 3:41 pm

“Dessler’s 2010 paper first showed this although he incorrectly claimed otherwise based on a laughable r^2 of 0.02.”

Wait. This can’t be the r² of statistical correlation fame. A value of 0.02 for that r² would be far beyond laughable, well into Loony Tunes territory. I can probably get a higher r² for a sneeze.
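For anyone who wants to see just how weak that is, here is a minimal Python sketch using synthetic data (purely illustrative, nothing from Dessler’s paper) of what an r² near 0.02 means in practice: the fit explains only about 2% of the variance.

```python
# Illustrative only: synthetic data showing how weak a relationship an r^2 of ~0.02 implies.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.14 * x + rng.normal(size=1000)   # a tiny signal buried in noise

r = np.corrcoef(x, y)[0, 1]            # Pearson correlation coefficient
print(f"r^2 = {r**2:.3f}")             # roughly 0.02: about 2% of variance explained
```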

David S
May 2, 2020 11:00 am

It’s dead (comma) Jim. Otherwise Jim is dead, but he’s not wearing a red shirt.

May 2, 2020 11:03 am

How dare they contradict the science!

And by the way, we live at the present time, not 50 million years ago
when Fridays For Future didn’t even exist!

pff

May 2, 2020 11:22 am

I look forward to the BBC and Grauniad reports that they are doubtless working diligently on as we speak.

jorgekafkazar
Reply to  Right-Handed Shark
May 2, 2020 3:59 pm

The media reports will consist of red herrings, false analogies, and ad hominems. They have batches of those.

Michael Jankowski
May 2, 2020 11:30 am

https://www.resilience.org/stories/2019-08-13/new-models-point-to-more-global-warming-than-expected/
“…What scares us is not that the CESM2 ECS is wrong…but that it might be right…”

http://www.cesm.ucar.edu/events/workshops/ws.2019/presentations/amwg/gettelman.pdf
“…CESM2 is Skillful: the best CCSM/CESM model ever
All those other models: so sad, total disaster…”

May 2, 2020 11:37 am

More data substantiating the obvious, but unfortunately, no amount seems to be sufficient to falsify the IPCC’s absurdly overestimated ECS. The reason is simple. The actual ECS is too low to support the existence of the IPCC. They chose a value upon their inception that was large enough to justify their creation and then canonized it as ‘settled science’ to ensure their continuation. They will never accept the truth as it means their dissolution.

MarkW
Reply to  co2isnotevil
May 2, 2020 2:33 pm

They chose a wide range for ECS so that when the science finally is settled and everyone agrees on a low ECS, they can still claim that they weren’t wrong, since a low ECS is still within their range.
In the meantime they claim that they have to work using the high end, because they have to think of the children.

Reply to  MarkW
May 2, 2020 8:29 pm

And yet the actual ECS is below the lower limit of their presumed range! Apparently, +/- 50% uncertainty isn’t enough for their ‘settled science’ to even be technically correct.

The data couldn’t be any more clear that the average effect on the surface from each of the 240 W/m^2 from the Sun is about 1.62 W/m^2 of NET surface emissions, with less than 5% variability from year to year. The next average W/m^2 of any kind of forcing will do the same, and this increase in surface emissions corresponds to a temperature increase of about 0.3 C +/- 0.02 C, whose upper limit is still 18% less than the IPCC’s lower limit of about 0.4 C.
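Here is the arithmetic as a minimal Python sketch. The 288 K mean surface temperature and the use of the Stefan-Boltzmann relation are assumptions added for illustration; the 1.62 W/m^2 gain is the figure stated above.

```python
# Sketch of the Stefan-Boltzmann arithmetic behind the ~0.3 C figure above.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
T_SURF = 288.0          # assumed global mean surface temperature, K

emission = SIGMA * T_SURF**4         # ~390 W/m^2 emitted by the surface
dF_dT = 4.0 * SIGMA * T_SURF**3      # ~5.4 W/m^2 per K (slope of sigma*T^4 at 288 K)

extra_emission = 1.62                # claimed extra surface emission per W/m^2 of forcing
delta_T = extra_emission / dF_dT     # ~0.30 K, the figure quoted above

print(f"surface emission ~ {emission:.0f} W/m^2")
print(f"dF/dT ~ {dF_dT:.2f} W/m^2 per K")
print(f"dT per extra W/m^2 of forcing ~ {delta_T:.2f} K")
```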

commieBob
May 2, 2020 11:38 am

But the CESM2 model projected Early Eocene land temperatures exceeding 55 degrees Celsius (131 F) in the tropics, which is much higher than the temperature tolerance of plant photosynthesis — conflicting with the fossil evidence.

The world was a different place 50 million years ago. In particular, the Isthmus of Panama had not yet closed. If you were going to apply a model, you would have to account for the radically different ocean currents. On the other hand, are the GCMs even capable of modelling ocean currents based on raw physics and geometry?

One way or another, the conflict between the models and the proxy evidence highlights shortcomings in the models.

JSMill
Reply to  commieBob
May 3, 2020 11:55 am

Are you sure about that, Commie? (ROTFL great handle)

I thought Climate Models were net energy balance, CO2 being “global” and all that … while plenty of models work out guesses (sorry) predictions on AMOC and such, the overall “equilibrium temperature” models are “energy in, energy out.” Roy Spencer had a good primer on this posted here, as I recall….

John Shotsky
May 2, 2020 11:50 am

Any model that bases future climate on CO2 will always be wrong…CO2 follows temperature, not the other way around. The fundamental basis of all of these models begins with CO2 causes warming. Of COURSE they are all wrong!

Reply to  John Shotsky
May 2, 2020 1:01 pm

Take a look at the Youtube videos of Potholer54, who goes to amazing lengths to try to explain this Inconvenient Truth away.

Reply to  Graemethecat
May 2, 2020 7:17 pm

Potholer is too concerned with perfecting his posh accent to worry about trivial things like reality.

Tom Abbott
Reply to  John Shotsky
May 3, 2020 5:44 am

“The fundamental basis of all of these models begins with CO2 causes warming. Of COURSE they are all wrong!”

Yes, they got the basic foundation of the current Human-Caused Climate Change hoax wrong, and everything that follows from that is also wrong. All the thousands of studies based on this flawed foundation are wrong. All the expensive CO2 mitigation efforts are wrong because they are unnecessary.

Yet, the IPCC keeps pushing these lies onto the public as if they were facts. We will keep pushing back. 🙂

Hocus Locus
Reply to  Tom Abbott
May 6, 2020 8:21 am

The CO2-lags-T-in-plain-sight issue has haunted me as well for years now. People seem to delve ever more deeply and desperately to find justification for the arbitrarily large sensitivity given to CO2 in models, when no ‘extraordinary’ evidence has emerged to back up the original extraordinary claims that promised to refute the simplest conclusion that could be reached by examining the Vostok signals.

Chaamjamal
May 2, 2020 12:04 pm

“Most fields of science don’t accept a model unless it has been rigorously validated against available data, but climate science is different; the modelling process itself frequently seems to be accepted as evidence that the climate model is correct”

Yes sir. A logic in reverse that leads to the weirdness that agw theory becomes the null hypothesis.

https://tambonthongchai.com/2019/02/03/hidden-hand/

Tom Abbott
Reply to  Chaamjamal
May 3, 2020 5:54 am

“Yes sir. A logic in reverse that leads to the weirdness that agw theory becomes the null hypothesis.”

Good point. The IPCC has turned science upside down.

It should be: Mother Nature (natural variability) is what causes the changes in the Earth’s weather, until proven otherwise.

The IPCC (wrong) way: CO2 is what causes the changes in Earth’s weather, until proven otherwise.

The IPCC has not shown that CO2 causes *any* changes in Earth’s weather, much less that CO2 causes all the changes. This is science by assertion. It’s made the IPCC and its Spawn a lot of money so far. I imagine they will continue this hoax for as long as possible.

Human-Caused Climate Change: The Biggest Hoax in Human History.

John Tillman
May 2, 2020 12:27 pm

More heresy from French paleoclimatologists, this time studying the Cretaceous Hothouse. Their low estimate of CO2 concentration is questionable, however, especially for the hottest middle of the period. IMO, plant food level was at least 1000 ppm throughout the period, with a high of perhaps 2000 and an average of 1700 ppm.

CO2 and temperature decoupling at the million-year scale during the Cretaceous Greenhouse

https://www.nature.com/articles/s41598-017-08234-0

CO2 is considered the main greenhouse gas involved in the current global warming and the primary driver of temperature throughout Earth’s history. However, the soundness of this relationship across time scales and during different climate states of the Earth remains uncertain. Here we explore how CO2 and temperature are related in the framework of a Greenhouse climate state of the Earth. We reconstruct the long-term evolution of atmospheric CO2 concentration (pCO2) throughout the Cretaceous from the carbon isotope compositions of the fossil conifer Frenelopsis. We show that pCO2 was in the range of ca. 150–650 ppm during the Barremian–Santonian interval, far less than what is usually considered for the mid Cretaceous. Comparison with available temperature records suggest that although CO2 may have been a main driver of temperature and primary production at kyr or smaller scales, it was a long-term consequence of the climate-biological system, being decoupled or even showing inverse trends with temperature, at Myr scales. Our analysis indicates that the relationship between CO2 and temperature is time scale-dependent at least during Greenhouse climate states of the Earth and that primary productivity is a key factor to consider in both past and future analyses of the climate system.

Reply to  John Tillman
May 2, 2020 12:48 pm

It’s quite clear the Earth’s temperature is controlled by the vast deep oceans and the water vapor those oceans transfer to the atmosphere, which then convects to move energy vertically. Thus water vapor, and the depressed adiabatic lapse rate it creates, is the most abundant and consequential GHG.

John Tillman
Reply to  Eric Worrall
May 2, 2020 5:25 pm

However, as for GCMs, clouds do not compute. They are “parameterized”. IOW, whatever GIGO modelers want or need them to be.

“Consensus climate science” is not just unscientific, but deeply antiscientific.

Clyde Spencer
Reply to  Eric Worrall
May 2, 2020 8:48 pm

Eric
Not just the tropics. When I lived in Vermont, I came to expect a thunderstorm every Summer afternoon. While the locals joked that Summer comes on July 4th, and leaves on July 5th, it really is a few days longer than that.

Rob_Dawg
May 2, 2020 12:33 pm

“Back testing” and “hind casting.” That’s like… you know… science?

Given a choice twixt climate prediction and a conjure bag I’d opt for a rolling of the bones.

John Tillman
May 2, 2020 12:34 pm

It takes unrealistically high CO2 levels to derive apparent Cretaceous temperatures, using GCMs. No problem! Just raise the ECS estimate.

Besides hot seas (and perhaps because of them), the mid-Cretaceous was also probably relatively cloudless.

This is from 2002, but the problem persists.

Possible atmospheric CO2 extremes of the Middle Cretaceous (late Albian–Turonian)

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2002PA000778

Atmospheric carbon dioxide (CO2) estimates for the Middle Cretaceous (MK) have a range of >4000 ppm, which presents considerable uncertainty in understanding the possible causes of warmth for this interval. This paper examines the problem of MK greenhouse forcing from an inverse perspective: we estimate upper ocean water temperatures from oxygen isotope measurements of well‐preserved late Albian–Turonian planktonic foraminifera and compare these against temperatures predicted by general circulation model (GCM) experiments with CO2 concentrations of 500–7500 ppm. At least 4500 ppm CO2 is required to match maximum temperatures inferred from well‐preserved planktonic foraminifera. Approximately 900 ppm CO2 produces a good match between the model and the minimum temperature estimates for the MK. An ocean model forced by these two extremes in surface conditions brackets nearly all available bottom water temperature estimates for this interval. The climate model results support nearly the entire range of MK CO2 estimates from proxy data. The ocean model suggests possible MK oceanographic changes from deep water formation in the high latitude region of one hemisphere to the other hemisphere in response to changes in atmospheric temperatures and hydrologic cycle strength. We suggest that, rather than contradicting one another, the various proxy CO2 techniques (especially those with high temporal resolution) may capture true variability in CO2 concentrations and that MK CO2 could have varied by several thousand ppm through this interval.

May 2, 2020 12:44 pm

We’ve been hearing for some time that the AR6 group of models predict more warming than the AR5s. Obviously, the doomsday message hasn’t been getting across properly because the industrialized countries aren’t making enough effort to be fossil fuel free, so the predictions just have to become more dire.

Never mind that the more warming the models predict, the sooner they will become so disconnected from reality that temperature adjustments won’t be able to keep up with them, and their errors will become transparently obvious. The green gangsters are getting panicky and they’re going for broke.

Perhaps AR6 will be their last roll of the dice. We can hope.

May 2, 2020 12:52 pm

“People underestimate the power of models. Observational evidence is not very useful” –

Oh, models are quite useful when your purpose is political and not science. The early COVID-19 models were quite useful in crashing the world economy and starting a Great Depression. How else are the UN and globalists going to get rid of the existential threat Donald Trump represents to globalism and UN hegemony, but to kill the US economy and sour the voters on him by November?

John Tillman
Reply to  Joel O'Bryan
May 2, 2020 2:34 pm

Well, surface station “observations” indeed aren’t very useful, except to show that, cooked to a crisp though they be, they still don’t jibe with GIGO GCMs.

And how can you validate models without reference to past observations?

Consensus “climate science” has passed through the postmodern looking glass.

Tom Abbott
Reply to  Joel O'Bryan
May 3, 2020 6:23 am

“Oh models are quite useful when your purpose is political and not science. The early COVID-19 models were quite useful in crashing the world economy and starting a Great Depression.”

Which computer model are you talking about? The University of Washington computer model projected an initial figure of 100,000 dead from Wuhan virus if mitigation actions were taken. The U.S. is currently nearing 70,000 deaths and the dying isn’t over yet, so the University of Washington virus computer model looks like it is in the ballpark and getting more accurate by the day, unfortunately for the innocents that have to suffer.

I guess one of these days all those who have been trashing the virus computer models are going to have to admit they were wrong. Or will they? How can they not when the figures are the figures?

And Btw, I mentioned earlier that a couple of people involved with the University of Washington virus computer models, a Dr. Murray and another man whose name I didn’t catch, both said that their new estimate was a total of 72,000 dead by Aug. 4, 2020.

I said at the time that this figure seemed a little low and I didn’t understand why they were making this prediction. I think it is clear the U.S. will exceed 72,000 dead within the next two weeks, not the next three months, so it looks like in this case the people at the University of Washington are estimating too low. Don’t ask me why. The trend and numbers look pretty obvious.

Here’s a link to the story:

https://twitter.com/CNN/status/1256041007138791425

“An influential coronavirus model is projecting over 72,000 coronavirus deaths in the US by early August.

Dr. Christopher Murray, who leads the team that did the modeling, explains why the latest projection has moved higher.”

end excerpt

We had 66,000 dead yesterday. Probably 68,000 today. Getting close to 72,000. I don’t know where this figure came from, but it doesn’t negate the fact that this University of Washington computer model predicted a low-range value of 100,000 dead with mitigation and we are getting close to that number.

The closer we get, the more nervous the virus computer model bashers will get.

I heard some poor, misguided person on tv this morning talking about how the Wuhan virus isn’t any more dangerous than the normal flu. She believed that because that is what she has been told by equally misguided individuals. She was saying this to promote getting back to work.

Lots of people are misguided by the attack on the virus computer models, too. Much ado about nothing. But that “much ado” has misled the public and might cause some of them to make unwise decisions like dismissing the seriousness of the Wuhan virus because they have been led to believe their leaders cannot be trusted because the virus computer models are claimed to be bogus.

I guess I’ll stop there.

Gyan1
May 2, 2020 1:21 pm

50% of the CMIP5 ensemble models have been out of range for 21 years vs observations. They should be dropped, but that would destroy the dangerous warming meme.

Rud Istvan
May 2, 2020 1:54 pm

I will be interested to learn how INM CM5 does. INM CM4 was the only one in CMIP5 that projected temperatures about ‘right’. INM has been publishing their upgrades and upgrade tests, but I haven’t found any full run stuff yet.

Reply to  Rud Istvan
May 2, 2020 2:31 pm

Rud, this is the latest that I found, posted in January.

https://rclutz.wordpress.com/2020/01/26/climate-models-good-bad-and-ugly/

Once again the Good Model INM-CM4-8 is bucking the model builders’ consensus. The new revised INM model has a reduced ECS and it flipped its cloud feedback from positive to negative. The description of improvements made to the INM modules includes how clouds are handled.

Stevek
May 2, 2020 2:14 pm

How do any of these models reconcile with observed temperatures since 1970? My understanding is the observed warming is way below the models. As temperatures rise, doesn’t the sensitivity of temperature to CO2 decrease?
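For what it’s worth, the usual basis for that diminishing per-ppm effect is the simplified logarithmic CO2 forcing expression of Myhre et al. (1998), ΔF ≈ 5.35 ln(C/C0) W/m². A minimal Python sketch of that expression (the 280 ppm baseline is the conventional pre-industrial value, used here only for illustration):

```python
# Sketch of the simplified logarithmic CO2 forcing expression (Myhre et al. 1998):
# dF = 5.35 * ln(C / C0) W/m^2, so each additional ppm adds less forcing than the last.
import math

def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Radiative forcing (W/m^2) relative to an assumed pre-industrial baseline of 280 ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(560.0))                       # one doubling: ~3.7 W/m^2
print(co2_forcing(281.0) - co2_forcing(280.0))  # 1 ppm added at 280 ppm: ~0.019 W/m^2
print(co2_forcing(561.0) - co2_forcing(560.0))  # 1 ppm added at 560 ppm: ~0.010 W/m^2
```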

May 2, 2020 2:26 pm

Contrary to the Star Trek quote earlier in this post, I invoke Monty Python:


It’s not dead, just resting.

tom0mason
Reply to  Ron Clutz
May 3, 2020 12:21 pm

Lovely plumage.

🙂

Robber
May 2, 2020 3:21 pm

I’m sure that this new analysis will be reflected in the next round of IPCC reports/sarc.
Yes, no virus can stop the IPCC’s work, even if those international junkets have had to be replaced with virtual meetings.
The next round of reports being developed for AR6: https://www.ipcc.ch/report/sixth-assessment-report-cycle/
The Physical Science Basis, April 2021
Mitigation of Climate Change, July 2021
Impacts, Adaptation and Vulnerability, October 2021
Synthesis Report: Climate Change 2022, June 2022
For a climate emergency, they seem to dawdle along. Isn’t the science settled? I’m sure Greta could write the report for them by next week.

J Cuttance
May 2, 2020 3:48 pm

News flash: The low-end models are rubbish too

John Tillman
Reply to  J Cuttance
May 2, 2020 5:29 pm

Except for the very lowest-end, ie closest to observed reality, Russian model. All the dozens of others should be tossed. But of course they won’t be, because CLIMATE CHANGE EMERGENCY!!!

BC
May 2, 2020 5:02 pm

I have read of a couple of ‘hindcasting’ analyses that have demonstrated the, at best, ‘shortcomings’ in the usual climate models. For example:
https://www.washingtonexaminer.com/opinion/op-eds/the-great-failure-of-the-climate-models
I recall another from a few years ago – I think it was reported on Jo Nova’s blog – but I can’t find it and don’t appear to have kept a copy of it. But, when searching for it, I found this in my archive:

In a 2009 interview with FlightGlobal, the late former Boeing 747 chief engineer, Joe Sutter, cautioned about reliance on computer-assisted design tools in aircraft development. “There should not be an over-emphasis on what computers tell you, because they only tell you what you tell them to tell you,” he said.

https://www.flightglobal.com/opinion/opinion-what-aircraft-designers-should-learn-from-joe-sutter/121608.article
Very droll!

May 2, 2020 5:16 pm

Most fields of science don’t accept a model unless it has been rigorously validated against available data, but climate science is different;

I remember seeing an episode of Modern Marvels on the History Channel years ago.
In the episode they did a thing on a car company using computer models to design bumpers that would stand up to a crash.
When they got a model with the desired results, they didn’t put it into production.
They built prototype bumpers, and crashed them, and then compared the results of the computer model vs reality.
The computer models saved them money in design and research but their output wasn’t mistaken for reality. How the prototype performed was the reality that production would be based on.

peterg
May 2, 2020 5:53 pm

I am doing a bit with SPICE, the electronic circuit modelling program. I am always impressed at how well it predicts the DC voltage levels in transistor circuits. Some computer models mostly work.

Clyde Spencer
Reply to  peterg
May 2, 2020 8:56 pm

peterg
As a general rule of thumb, properly constructed deterministic models provide results that are superior to stochastic models. The problem is, not all systems lend themselves to deterministic formulations.

Reply to  peterg
May 3, 2020 6:18 am

DC is pretty easy; there are no frequency-related terms and you don’t even need calculus to solve. Now do AC circuits, especially at RF frequencies. When you get the circuit you want, and actually build it, the fun begins. Parasitics, varying permeability, etc. Models can only get you to a starting point, not the final solution.

tom0mason
Reply to  Jim Gorman
May 3, 2020 12:50 pm

Yes Jim Gorman,
It is the tool we have and, as you say, is a starting place.
Then later, RF circuits can become unstable again when silly environmental matters like (sometimes only small) changes in temperature, vibration, dust accumulation, and damp interfere with all those carefully constructed mathematical parameters of the components, cables, connectors, and the PCB materials.

May 2, 2020 5:57 pm

Some of the latest climate models provide unrealistically high projections of future warming

That statement implies that some climate models provide realistically high projections of future warming.

How does anyone decide that some projection is realistically high? Is there a committee that decides? A consensus of climate casuists?

How does one judge any given projection is realistic at all? There is no adequate physical theory of the climate available to make any estimate.

Critical thinkers are at a premium in consensus climatology. One may analogize that the bloody-minded CO2-monotheists have driven off all the free-thinking heretics.

These people live in a science-abusive cloud-cuckoo land.

Clyde Spencer
Reply to  Pat Frank
May 2, 2020 8:58 pm

Come on, Pat! You’re just being realistic. 🙂

Reply to  Clyde Spencer
May 2, 2020 9:36 pm

Careful, Clyde. Nobody expects the Spanish Inquisition. 🙂

John Bruyn
May 2, 2020 5:59 pm

There are 4 major points climate modellers need to consider very seriously:
1. The equatorial speed of Earth’s rotation makes all gases circulate through the atmosphere vertically and act as surface cooling agents;
2. CO2 in polar ice cores is lower than in the tropics, as snow itself has minimal CO2 if any, which means that the air trapped in firn is from the warmer seasons only and gives a false impression if taken as an annual value;
3. Earth’s orbit and orientation with respect to the sun vary all the time, affecting the sun’s illumination of the poles;
4. Anthropogenic CO2 additions to the trillions of tonnes circulating through the atmosphere and sequestrated annually by photosynthesis and by precipitation (as H2CO3) cannot be cumulative and if anything would have a net global cooling effect.

John Shotsky
Reply to  John Bruyn
May 2, 2020 7:05 pm

There is one more thing about CO2…
It is ALL emitted at the surface, and it is ALL absorbed at the surface. It has a higher concentration at the surface for those very reasons. To call it ‘well mixed’ is plain wrong. Most of it is at the surface because the atmosphere is thickest at the surface. So, well-mixed is a misnomer. It doesn’t ‘rain’ CO2, and CO2 isn’t lighter than the other major constituents of the atmosphere – in fact, it is heavier, which is why it settles and puts fires out. So, to claim that there is some ‘blanket’ of CO2 ‘out there’ that traps heat is plain disingenuous, or is claimed by those that have no clue about gases.

Gerald Machnee
May 2, 2020 6:40 pm

May be overly sensitive to CO2?????????

Todd Peterson
May 2, 2020 7:50 pm

Eric, Your last sentence made Richard P Feynman PROUD.

Steven Mosher
May 3, 2020 1:04 am

“University of Michigan researchers have done the unthinkable, and checked climate model predictions against available paleo-climate data to see if the predictions are plausible.”

It is done all the time.

One problem is this.

1. Your paleo “data” is not readily comparable, so you have to turn it into temperatures – you know, turning tree rings into temperature. This is a MODEL.
2. Your GCM puts out data. This is not observational data. It too is a MODEL.

So you compare the two models of temperature: one statistical, derived from proxies; one physical, derived from first principles.

Comparing two models is hard. Long ago I was sitting in an AGU session that was discussing how to compare models with paleo data. The question was raised: if the paleo model of rainfall doesn’t match the GCM model of rainfall, which is correct?

Anyway, comparing models to paleo data is nothing new. There is a whole project devoted to it: PMIP.

https://pmip.lsce.ipsl.fr/

I do wish authors of posts would check the actual activities that people engage in.

https://pmip.lsce.ipsl.fr/about_us/history

The Paleoclimate Modelling Intercomparison Project (PMIP) emerged from two parallel endeavours. During the 1980s, the Cooperative Holocene Mapping Project showed the utility of combining model simulations and syntheses of paleoenvironmental data to analyse the mechanisms of climate change. At the same time, the climate-modelling community was becoming increasingly aware that responses to changes in forcing were model dependent. The need to investigate this phenomenon led to the establishment of the Atmospheric Modelling Intercomparison Project (AMIP) – the first of a plethora of model intercomparison projects of which PMIP (and CMIP1) are part.

The specific aim of PMIP was, and continues to be, to provide a mechanism for coordinating paleoclimate modelling and model-evaluation activities to understand the mechanisms of climate change and the role of climate feedbacks. To facilitate model evaluation, PMIP has actively fostered paleodata synthesis and the development of benchmark datasets for model evaluation. During its initial phase (PMIP1), the project focused on atmosphere-only general circulation models; comparisons of coupled ocean-atmosphere and ocean-atmosphere-vegetation models were the focus of PMIP2.

In PMIP3, project members are running the CMIP5 paleoclimate simulations and will lead the evaluation of these simulations. However, PMIP3 will also run experiments for non-CMIP5 time periods and will be coordinating the analysis and exploitation of transient simulations across intervals of rapid climate change in the past. PMIP also provides an umbrella for model intercomparison projects focusing on specific times in the past, such as the Pliocene Modelling Intercomparison Project (PlioMIP), or on particular aspects of the paleoclimate system, such as the Paleo Carbon Modelling Intercomparison Project (PCMIP).

PMIP membership is open to all paleoclimatologists, and we actively encourage the use of archived simulations and data products for model diagnosis or to investigate the causes and impacts of past climate changes.

Quoted from Braconnot et al, “Evaluation of climate models using palaeoclimatic data”, Nature Climate Change 2, 417-424 (2012), doi:10.1038/nclimate1456

Steven Mosher
Reply to  Eric Worrall
May 3, 2020 5:06 am

Your claim

‘University of Michigan researchers have done the unthinkable, and checked climate model predictions against available paleo-climate data to see if the predictions are plausible.”

It is not unthinkable. They have done this for a long time.
Now you add the quibble… but Eocene.

you didn’t check

https://www2.atmos.umd.edu/~dankd/Eocene.html

https://www.ncdc.noaa.gov/global-warming/early-eocene-period

https://www.ipcc.ch/site/assets/uploads/2018/02/WG1AR5_Chapter05_FINAL.pdf

now I ask you

here is your claim

‘University of Michigan researchers have done the unthinkable, and checked climate model predictions against available paleo-climate data to see if the predictions are plausible.”

how could it be unthinkable when it has been done before and published years ago?

MAYBE you should just report the facts and not try to color them with false claims.

Reply to  Eric Worrall
May 4, 2020 2:29 pm

So just what are they PIMPing?
(Sorry. Had to be said.)

CheshireRed
May 3, 2020 4:08 am

High sensitivity has always been the Achilles heel of climate models, with the only surprise being that climate sceptics haven’t challenged alarmists’ projected warming claims anywhere near strongly enough.

Geological records show higher CO2 in the past yet there was no ‘runaway warming’. Have the laws of physics changed or something? Erm… nope. That nails the ‘tipping points’ / positive feedbacks / amplification / high forcing nonsense. Neither proxy data nor today’s observations support the theory – if it were true we wouldn’t be here now – so ‘runaway warming’ is falsified right there.

Way past time AGW nonsense was put to bed forever.

Steven Mosher
May 3, 2020 5:11 am

“Most fields of science don’t accept a model unless it has been rigorously validated against available data, but climate science is different; ”

Models are validated against a SPECIFICATION.

For example, my specification can be “model the temperature of X to within 2 K of the measured value.”
A model that exceeded 2 K of the actual would fail validation. One that was off by 1.9 K would pass validation.

And sometimes the spec is rather easy to hit. The model shall be more skillful than a naive forecast.
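A minimal Python sketch of that kind of spec check (the 2 K tolerance is just the example figure above, not anyone’s actual acceptance criterion):

```python
# Sketch of validating a model output against a simple numeric specification,
# following the 2 K tolerance example given above.
def passes_spec(modeled_k: float, measured_k: float, tolerance_k: float = 2.0) -> bool:
    """Return True if the modeled temperature is within the specified tolerance of the measurement."""
    return abs(modeled_k - measured_k) <= tolerance_k

print(passes_spec(289.9, 288.0))   # off by 1.9 K -> True (passes the spec)
print(passes_spec(290.5, 288.0))   # off by 2.5 K -> False (fails the spec)
```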

sycomputing
Reply to  Steven Mosher
May 3, 2020 10:11 am

Models are validated against a SPECIFICATION.

That’s interesting. Is this regardless of industry?

Mark Pawelek
May 3, 2020 5:23 am

This falsification of CESM2 will probably find modelers tweaking their model to make CESM3, a model which will equal CESM2 in BS.

Dr Roger Higgs
May 3, 2020 12:41 pm