Robert L. Bradley Jr. – June 23, 2021

“Climate modeling is central to climate science….” (Steven Koonin, below)
When the history of climate modeling comes to be written in some distant future, the major story may well be how the easy, computable answer turned out to be the wrong one, resulting in overestimated warming and false scares from the enhanced (man-made) greenhouse effect.
Meanwhile, empirical and theoretical evidence is mounting toward this game-changing verdict despite the best efforts of the establishment to look the other way.
Consider a press release this month from the University of Colorado Boulder, “Warmer Clouds, Cooler Planet,” subtitled “precipitation-related ‘feedback’ cycle means models may overestimate warming.”
“Today’s climate models are showing more warmth than their predecessors,” the announcement begins.
But a paper published this week highlights how models may err on the side of too much warming: Earth’s warming clouds cool the surface more than anticipated, the German-led team reported in Nature Climate Change.
“Our work shows that the increase in climate sensitivity from the last generation of climate models should be taken with a huge grain of salt,” said CIRES Fellow Jennifer Kay, an associate professor of atmospheric and oceanic sciences at CU Boulder and co-author on the paper.
The press release goes on to state how incorporating this negative feedback will improve next-generation climate models, something of the utmost importance given the upcoming Sixth Assessment of the Intergovernmental Panel on Climate Change (IPCC). But will conflicted modelers and the politicized IPCC be upfront about the elephant in the room?
Background
Strong positive feedbacks to the warming from carbon dioxide (CO2) and other manmade greenhouse gases (GHG) are what turn a modest, even beneficial, warming into an alarming one. The assumption has been that increased evaporation in a warmer world (primarily from the oceans) creates a strongly positive water-vapor feedback, doubling or even tripling the primary warming.
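In textbook terms (a generic sketch of feedback algebra, not the formulation of any particular model), the claim is that a no-feedback warming ΔT0 gets amplified by a total feedback factor f:

\[ \Delta T \;=\; \frac{\Delta T_0}{1 - f}, \qquad 0 < f < 1 \]

With ΔT0 of roughly 1.1–1.2 K for a doubling of CO2, f ≈ 0.5 doubles the warming and f ≈ 0.67 triples it, while a net-negative f would shrink it below ΔT0. The whole fight is over the size, and even the sign, of f.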
In technical terms, water molecules trap heat, and clouds or vapor in the upper tropical troposphere – where the air is extremely dry – trap substantially more heat, thickening the greenhouse. How water inhabits this upper layer (≈30,000–50,000 feet) to either block (magnify) or release (diminish) the heat remains in debate, leaving even the sign of the externality unknown for climate economics. And it is in the upper troposphere that climate models are most at odds with the data.
Assuming fixed relative humidity allows modelers to invoke ceteris paribus against the altered physical processes that might well negate the secondary warming. This controversial assumption opens the door to hyper-modeling at odds with reality. (For economists, the analogy would be assuming “perfect competition” to unleash hyper-theorizing.)
For decades, model critics have questioned the simplified treatment of complexity. Meanwhile, climate models have predicted much more warming than has transpired.
Theoreticians have long been at odds with model technicians. MIT’s Richard Lindzen, author of Dynamics in Atmospheric Physics, has advanced different hypotheses about why the water-vapor feedback is much less than modeled. Judith Curry, whose blog Climate Etc. is a leading source for following physical-science and related developments, is another critic of high-sensitivity models.
“There’s a range of credible perspectives that I try to consider,” she states. “It’s a very complex problem, and we don’t have the answers yet.”
And now we have way too much confidence in some very dubious climate models and inadequate data sets. And we’re not really framing the problem broadly enough to … make credible projections about the range of things that we could possibly see in the 21st century.
Mainstream Recognition
Climate scientists know that climate models are extremely complicated and fragile. In What We Know About Climate Change (2018, p. 30), Kerry Emanuel of MIT explains:
Computer modeling of global climate is perhaps the most complex endeavor ever undertaken by humankind. A typical climate model consists of millions of lines of computer instructions designed to simulate an enormous range of physical phenomena….
Although the equations representing the physical and chemical processes in the climate system are well known, they cannot be solved exactly. …. The problem here is that many important processes happen at much smaller scales.
The parameterization problem is akin to the fallacies of macroeconomics, where the crucial causality of individual action is ignored. Microphysics is the driver of climate change, yet its equations are unsettled and operate at sub-grid scale. Like macroeconomics, macro-climatology should have been heavily qualified and demoted long ago.
My mentor Gerald North, former head of the climatology department at Texas A&M, had a number of observations about the crude, overrated nature of climate models back in 1998–99 that are still relevant today.
We do not know much about modeling climate. It is as though we are modeling a human being. Models are in position at last to tell us the creature has two arms and two legs, but we are being asked to cure cancer.
There is a good reason for a lack of consensus on the science. It is simply too early. The problem is difficult, and there are pitifully few ways to test climate models.
One has to fill in what goes on between 5 km and the surface. The standard way is through atmospheric models. I cannot make a better excuse.
The different models couple to the oceans differently. There is quite a bit of slack here (undetermined fudge factors). If a model is too sensitive, one can just couple in a little more ocean to make it agree with the record. This is why models with different sensitivities all seem to mock the record about equally well. (Modelers would be insulted by my explanation, but I think it is correct.)
[Model results] could also be sociological: getting the socially acceptable answer.
The IPCC 5th assessment (2013), the “official” or mainstream report, recognizes fundamental uncertainty while accepting model methodology and results at face value. “The complexity of models,” it is stated (p. 824), “has increased substantially since the IPCC First Assessment Report in 1990….”
However, every bit of added complexity, while intended to improve some aspect of simulated climate, also introduces new sources of possible error (e.g., via uncertain parameters) and new interactions between model components that may, if only temporarily, degrade a model’s simulation of other aspects of the climate system. Furthermore, despite the progress that has been made, scientific uncertainty regarding the details of many processes remains.
The humbling nature of climate modeling was publicized by The Economist in 2019. “Predicting the Climate Future is Riddled with Uncertainty” explained:
[Climate modeling] is a complicated process. A model’s code has to represent everything from the laws of thermodynamics to the intricacies of how air molecules interact with one another. Running it means performing quadrillions of mathematical operations a second—hence the need for supercomputers.
[S]uch models are crude. Millions of grid cells might sound a lot, but it means that an individual cell’s area, seen from above, is about 10,000 square kilometres, while an air or ocean cell may have a volume of as much as 100,000km3. Treating these enormous areas and volumes as points misses much detail.
Clouds, for instance, present a particular challenge to modellers. Depending on how they form and where, they can either warm or cool the climate. But a cloud is far smaller than even the smallest grid-cells, so its individual effect cannot be captured. The same is true of regional effects caused by things like topographic features or islands.
Building models is also made hard by lack of knowledge about the ways that carbon—the central atom in molecules of carbon dioxide and methane, the main heat-capturing greenhouse gases other than water vapour—moves through the environment.
“But researchers are doing the best they can,” The Economist concluded.
Climate models, in fact, significantly overestimate warming, in some comparisons by half or more. And the gap is widening as a coolish 2021 is well underway. As for the future, anthropogenic warming is constrained by the logarithmic rather than linear effect of GHG forcing. This saturation effect means that as the atmosphere contains more CO2, each increment adds less and less warming. The warming from a doubling of CO2, in other words, does not recur at a tripling but at a quadrupling.
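The logarithmic relationship is commonly approximated (following Myhre et al., 1998) as

\[ \Delta F \;\approx\; 5.35 \,\ln\!\left(\frac{C}{C_0}\right) \ \text{W/m}^2, \]

so every doubling adds the same roughly 3.7 W/m2 of forcing: going from 280 to 560 ppm adds as much as going from 560 to 1,120 ppm, which is why the warming contributed per added unit of CO2 declines as the concentration rises.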
The mitigation window is rapidly closing, in other words, explaining the shrill language from prominent politicians. But it is the underlying climate models, not the climate itself, that are running out of time.
“Unsettled” Goes Mainstream
The crude methodology and false conclusions of climate modeling are emerging from the shadows. Physicist and computer expert Steven Koonin, in his influential Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters (chapter 4), explains:
Climate modeling is central to climate science…. Yet many important phenomena occur on scales smaller than the 100 km (60 mile) grid size (such as mountains, clouds, and thunderstorms), and so researchers must make “subgrid” assumptions to build a complete model….
Since the results generally don’t much look like the climate system we observe, modelers then adjust (“tune”) these parameters to get a better match with some features of the real climate system.
Undertuning leaves the model unrealistic, but overtuning “risks cooking the books—that is, predetermining the answer,” adds Koonin. He then quotes from a paper co-authored by 15 world-class modelers:
… tuning is often seen as an unavoidable but dirty part of climate modeling, more engineering than science, an act of tinkering that does not merit recording in the scientific literature…. Tuning may be seen indeed as an unspeakable way to compensate for model errors.
Conclusion
Climate modeling has arguably been worse than nothing because false information has been presented as true and “consensus.” Alarmism and disruptive policy activism (forced substitution of inferior energies; challenges to lifestyle norms) have taken on a life of their own. Fire, ready, aim has substituted for prudence, from science to public policy.
Data continue to confound naïve climate models. Very difficult theory is slowly but surely explaining why. The climate debate is back to the physical science, which it never should have left.
The “State of the Art Model” is still in Kindergarten.
More like sperm searching for egg…
And not finding it.
It’s all about the magical forcing.
Hurry hurry hurry. We have to save the earth before we find out we don’t. We are running out of time.
Doing nothing is better than doing something stupid and wasteful.
The principal principle: just because you can, doesn’t mean you should. Discerning minds are a prerequisite.
Don’t just do something, stand there.
Doing nothing is better than doing something about nothing.
….or about a non-problem.
A big problem I have with climate models, and why I consider them fake science, is: I only ever see reports comparing their net average projections. These models all claim to be simulations. If they are simulations, then it should also be possible to compare the regional projections which each model makes as well. I’ve never seen this done. I suspect regional climate projections of the models vary far more widely, and wildly, than net aggregate projections, and that this wide variation is the reason I’ve never seen regional variations compared.
If they are simulations, then when two models have the same, or very similar, global net average climate projections, each step which took model projections there (each regional climate) should be the same for those two models. A better picture of model skill and competence will be found by comparing how the variable regional climate projections stack up when both models have similar net averages.
Mark, you wrote, “If they are simulations, then it should also be possible to compare the regional projections which each model makes as well. I’ve never seen this done.”
In days past here at WUWT (and elsewhere), we (I used to be a regular contributor here) used to prepare posts that compared regional data (for continents, countries, and oceans) to the outputs of regional projections from global models. The models performed horrendously. The climate “science” community’s explanation was that the models were designed for global use, not regional. The well-known (by the modelers) poor performance at less-than-global levels led the modelers to prepare “regional models,” which they piggybacked on the global model outputs.
Sample of model-data comparison of global models presented on per-ocean basis from my blog:
Maybe the IPCC’s Modelers Should Try to Simulate Earth’s Oceans | Bob Tisdale – Climate Observations (wordpress.com)
Cross post here at WUWT:
Maybe the IPCC’s Modelers Should Try to Simulate Earth’s Oceans – Watts Up With That?
Regards,
Bob
Climate science is the only science in which you are permitted to average a bunch of wrong answers and then claim that the results are correct.
Bet that wouldn’t work so well in Chemistry or Geometry or Engineering or…you get the idea!
Isn’t Jennifer Kay’s statement refreshing though?
“Our work shows that the increase in climate sensitivity from the last generation of climate models should be taken with a huge grain of salt,” said CIRES Fellow Jennifer Kay.
Not only correct, but better than the best single prediction.
It is my understanding that regional model comparisons are often, if not usually, at odds. That is especially true for precipitation, where some models predict increased precipitation, and others, reduced — and even drought!
The one thing that models seem to have general agreement with is in future temperature increases. However, as I demonstrated in a past article, a simple linear extrapolation of Hansen’s own data does a better job of predicting temperatures than his models.
https://wattsupwiththat.com/2018/06/30/analysis-of-james-hansens-1988-prediction-of-global-temperatures-for-the-last-30-years/
“If they are simulations, then it should also be possible to compare the regional projections which each model makes as well. I’ve never seen this done.”
MP
From my reading, your supposition of wide variation in regional comparisons is correct. This has not deterred the use of models for regional planning. I commented on the inappropriateness of this practice when state money was used to develop state and regional plans for Montana’s infrastructure and economy. They just chose one model and looked at how our climate will change for the next 100 years. Not only are the regional comparisons poor, they are incoherent from one model to the next, so SW Montana will be really wet or suffer drought depending on the model you choose. The study is worse than useless.
Hopefully you pointed that out to them and informed them all they needed to do was to be prepared for floods and droughts (and anything in between) and everything would be just peachy.
The answer for both, of course, is “Build dams.”
Of course, that 6,000-year-old solution (probably older) wasn’t produced by a $10 million grant study, of which $9,900,000 went into various parasite pockets.
Climate modellers, who presume to predict local or regional climate change, should be required to post multi-million dollar bonds for a period of 20 years which they would forfeit if their forecast was not accurate.
They’re regional failures with temperature and factors that affect temperature like clouds, precipitation, etc.
But if the global temperature anomaly is reasonably close between modeled and observed, the sum of all of those bad model results is considered “accurate.”
One way of determining how much actual effort commenters who profess an interest in the subject of “climate change / climate science” have put into their personal “research / investigation” of the subject is to see how often they cite the “gold standard” IPCC WG1 reports, AKA “The Physical Science Basis”, the most recent of which was AR5 back in 2013.
Chapter 10 of AR5 is titled:
“Detection and Attribution of Climate Change: from Global to Regional”.
At the end of chapter 11, “Near-term Climate Change: Projections and Predictability”, on pages 1013 and 1014, you will discover:
“Box 11.2: Ability of Climate Models to Simulate Observed Regional Trends”.
In addition to the (liberal) sprinkling of sub-sections on “regional” aspects throughout AR5, for both “reconstructions” of the past and “projections” into the future, when it comes to “projections” of how the climate might evolve over the 21st century (and beyond) the IPCC editors of AR5 decided that they would finish the whole WG1 report with chapter 14, dedicated entirely to:
“Climate Phenomena and their Relevance for Future Regional Climate Change”.
– – – – – – – – – – – – – – – – – – – –
PS: AR6 is apparently due to be finalised this autumn, after being delayed due to the pandemic, exact release date still TBD.
From their press release of the 28th of May:
I am guessing that I am not the only person looking forward, as the saying goes, “with eager anticipation” for what the IPCC has to say about how the thinking on both global and regional reconstructions and projections has evolved over the last 8 years.
Forget it all! Even Curry and Lindzen have no clue about the real problem with ECS estimates. I am still writing a series of articles on ECS and all that went wrong. A big part of it, of course, is the modeling once all the necessary corrections are applied. I can already tell where it is heading, which is to about 0.2–0.3 K in ECS.
Yet there is a sneak preview everyone can enjoy already. Load MODTRAN, switch to “1976 US standard atmosphere,” and add “Stratus… Top 2.0km” to have some clouds in the model; we need to get as close to 240 W/m2 in emissions as possible.
http://climatemodels.uchicago.edu/modtran/modtran.html
Then emissions are 242.91 W/m2. Double CO2 to 800 ppm and emissions will be 240.618 W/m2. Then we need to add a temperature offset of 0.61 K while holding vapor constant. That is only(!) 0.61 K for 2xCO2. Holding relative humidity constant, this temperature offset needs to be 0.78 K. And that is 2xCO2 plus vapor feedback, only about a quarter of average ECS estimates of 3 K (+/- 1.5). Also, vapor feedback adds only 28% to 2xCO2.
The big question: is MODTRAN totally wrong? Actually not; it is simply not incorporating the many blunders ECS estimates are built on. If anything, MODTRAN errs in the opposite direction, as it fails to flatten the lapse rate (that is not a parameter you can adjust in this interface).
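For readers who want to check the arithmetic in the comment above, here is a rough back-of-envelope sketch (my own illustration, not MODTRAN output): treat the drop in outgoing emission as the forcing and divide by an assumed Planck-only response of about 3.2 W/m2 per K.

```python
# Back-of-envelope check of the MODTRAN numbers quoted above (illustrative only).
# The 3.2 W/m^2 per K Planck-only response is an assumed round number,
# not something the MODTRAN run supplies.

olr_baseline = 242.91    # W/m^2, quoted emission at 400 ppm CO2 with stratus clouds
olr_doubled = 240.618    # W/m^2, quoted emission at 800 ppm CO2, same profile

delta_f = olr_baseline - olr_doubled    # implied forcing from doubling CO2
planck_response = 3.2                   # W/m^2 per K (assumption)

delta_t = delta_f / planck_response
print(f"Implied forcing: {delta_f:.2f} W/m^2")
print(f"Planck-only warming: {delta_t:.2f} K")
# ~0.7 K, the same ballpark as the 0.61-0.78 K offsets quoted in the comment.
```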
This version of MODTRAN is limited by the “wrapper” the folks at U of Chicago provided to make it convenient to use. Indeed, you are correct that you cannot do much more than assume a particular atmospheric model and then adjust it up or down by a fixed temperature. You can do nothing to the lapse rate, and you also cannot vary the viewing direction and a whole slew of other things. However, it still has uses. I do wish a more flexible wrapper were provided.
But what would happen with the lapse rate? Any increase in temperature will flatten the lapse rate, so that (upper) troposphere temperatures increase more strongly than the surface. You would have more than 0.78 K up in the troposphere, and even less than 0.78 K at the surface. There is no remedy for the consensus ECS figures anyway.
See my comment to Andy May on his immediately preceding post. And see my post here at WUWT about 6 years ago. Search ‘The trouble with climate models’ for the direct link. Unavoidable parameterization drags in the attribution problem with natural variation.
IMO the emerging CMIP6 results are apparently looking more ridiculous than CMIP5, so slowly the realization is more broadly dawning that the whole climate modeling exercise was fubar from the beginning.
Even if the climate models weren’t fubar, the realization is also slowly dawning that the green solution of renewables at any meaningful penetration is fatal to grid stability. Renewables are intermittent and provide no grid inertia. The extra costs to fix those grid problems (backup generation, synchronous condensers) make renewables fatally uneconomic, on top of their already high direct (hence subsidized) costs.
Rud, are the ECS values for #6 available yet?
I have seen (but did not save) a preliminary graphical analysis. The low was 1.8, of course the Russian INM, now CM5. The high was over 5, dunno whose. If I recall correctly, the mean was close to 4 and the median was over 3.5. Both are big problems. I’ll go to my main machine, see if I can relocate it, and if so will post the graphic here.
Did not like the poor resolution image below so just tracked down the original. It is at carbonbrief.org. The URL to the post with the high res very easily readable CMIP6 ECS graphic is:
https://www.carbonbrief.org/CMIP6-the-next-generation-of-climate-models-explained
Here you go. Sorry about the poor resolution. Captured tiny, then expanded.
More resolution for you. Click to enlarge.
Ditto.
Excellent, thanks.
I’d call it something more like redesign and rebuild of an ad-hoc system to a new set of requirements. Not surprising at all that re-making the grid to do something other than what it “jest growed up fer” should cost more than just about any other “solution” for an undefined problem. Renewable and Carbon Free are a con, and the grid was never fit for generation or storage.
Here is the ECS histogram; note the highly non-normal distribution. One could argue that the values less than ECS=3.5 are normal, though.
The first legitimate response to consider when we’re presented with any problem is –
DO NOTHING.
Then consider the likely (not “maybe”) ramifications, and assess their impacts.
If our politicians had taken this rational approach to Al Gore’s risible scare mongering back in the day, would we now be kinda pleased with how the planet was greening, and the weather was behaving pretty much as it always has?
What should happen immediately about climate models is that they get completely de-coupled as inputs to energy development and distribution policies & programs.
The modelers should take a careful look at the Russian model and determine what is different between the Western World models and the Russian model. That would provide insight on the sensitivity of their models to some of the critical parameters.
What I find incredible is that the models seem to be getting worse over time instead of better. To me, that is a strong suggestion that there are some fundamental problems resulting from unverified assumptions, probably in the parameterizations — unless the warm bias is a purposeful attempt to increase their funding.
You think they are getting worse because you think their objective is to accurately forecast climate. And you therefore think that their obvious failure to do this is a defect.
It’s actually an important and excellent feature. The point of the models is not to forecast anything. It’s to act as a reference point which the activists can point to as ‘the science’. It also acts as a reference point which, when doubted, can be used to justify the claim that critics are in denial, denialists.
It is quite important actually, for an activist, to have unrealistic models. If you have realistic ones, you get drawn into practical debates about what they imply for policy. You do not want that. What you want is for models which cannot be justified to imply policies that it is impossible to imagine anyone implementing.
Remember, you are an activist; what you want is to take unconditional power. Only once. That is all that is necessary.
This is not about warming, and it’s not about climate or climate models either. These are just the pegs to hang the organizational drive on.
Bang on, Michel. A recent interview with a co-founder of Extinction Rebellion proves your point.
It’s hard to watch because the guy’s an utter moron, but revealing nonetheless.
I saw this interview too – the guy comes across as completely insane and fixated on mass rape, total destruction and devastation – he’s an advert for treating XR as a terrorist organisation.
Ron Clutz has done that on his blog in some detail both for INM CM4 and CM5. His several posts on this are well worth reading.
IIRC, for CM4 the two big differences were more ocean thermal inertia and more rainfall (so lower WVF). ECS was ~2.2. The big improvement in CM5 was a further narrowing of the gap between modeled and now-observed (ARGO-era) precipitation, which lowers WVF further. ECS ~1.8—in the observational ballpark.
Only it ignores the fact that the observations don’t reflect a CO2 effect but natural changes to the climate that have nothing to do with CO2.
Thanks for the background, Rud.
The trouble is that the modelers are desperately trying to give the politicians their money’s worth.
With science, the scientific method is best.
With a multitude of scientists dependent on government welfare, the sociopolitical science of consensus keeps the paychecks coming in. So the big battle is finding creative ways to keep the alarmist grift from leaking out … oops!
As I keep saying, as do several others, if this were true, then ANY warming, for ANY reason, would always necessarily result in runaway global warming. It never has, therefore the hypothesis is quite obviously and demonstrably false.
How can even a moderately intelligent and moderately educated person possibly believe this obviously and demonstrably false hypothesis?
I’ve been saying the same for years, and not just here. Even back when you could post something remotely skeptical on the Accuweather climate blog and not be “moderated” I said that many times – and not a single one of the True Believers ever had an answer to it.
The Earth’s climate history shows absolutely no evidence of any positive feedback and every indication that the feedbacks are overwhelmingly negative, offsetting, stabilizing feedbacks (in total). If not, the long periods of stability seen in the Earth’s climate history simply couldn’t exist, and the climate history would show constant wild fluctuation to extremes, which has never happened.
But this time it is different.
CO2 concentrations in the atmosphere have been MUCH higher than today – much higher even than any number we’re worried about – yet there was no runaway warming then. So yeah, I agree, if it didn’t happen when CO2 was ~5000ppm or higher, why would it happen now?
What is the mechanism that prevents 5000ppm from turning runaway but allows 500ppm to do so?
Maybe they figure CO2 got a college degree since then, so that now it “knows” how to impact the temperature.
While they continue to trash Entropy and the 2nd Law, take the 1st Law too literally, misunderstand Cause & Effect, and endow computers & CO2 with powers that they simply haven’t got, they can only ever come up with Magically Conceived Garbage and Sheer Fantasy.
When your models are simply vehicles perpetuating the same lie over and over, well, yeah, it is worse than nothing. Holding these lie-spewing liars directly, personally, and financially responsible for their lies will make it stop. Winnie Mandela was the UN Advocate for Necklacing; I say we follow her lead.
Uh, NO. Try “that might well negate any and all warming whatsoever.” THAT is what “observations” support, and observations trump theory, or hypothesis.
More like “we are being asked how the body’s temperature is determined, and whether the slightest deviation should be cause for panic.” Let’s stop feeding the “crisis” bullshit, there isn’t one.
The HYPOTHETICAL warming, you mean – NO warming from increasing CO2 levels has ever been empirically demonstrated here in the real world.
And this needs to stop – because The Only Crisis is Their Supposed “Cure.”
Between the upcoming climate modeling disaster and the already existing Covid modeling disaster, I’m afraid computer models are facing a huge crisis of confidence. OMG, this means politicians may have to make their own decisions! Which is worse, climate models or politicians? We are doomed.
https://www.heritage.org/public-health/commentary/failures-influential-covid-19-model-used-justify-lockdowns
The longer I live, the more I understand why Richard Feynman said: “Science is the belief in the ignorance of experts.”
Full disclosure, I used to write petrophysical models. I know, I know, you’re saying where is the rope?
“… OMG, this means politicians may have to make their own decisions! …”.
Some have gone too far, Boris Johnson for instance; hopefully Biden is out of office before he can do too much damage, and not only to the US.
I’m hoping there is a reckoning someday and that I’m still around to see it.
Andy, I’m of the opinion that one of the few fields of research that is healthy and coming up with new, useful surprises is Materials Science. I would attribute that to having clear-cut objectives with respect to the properties necessary to be useful. They don’t predict such and such a material could be useful in 30 years. It either does the job now or it doesn’t. If it doesn’t, then onto something else until they find what they are looking for.
Petrophysical models are similar. You make a prediction, and drill some holes. If they don’t produce, it is back to the ‘drawing board.’ What climate science is missing is immediate, ruthless feedback! They don’t operate within the spirit of the Scientific Method. They manage to skate by on promises of “may, could, possibly, etc.”
And by having their “predictions,” or whatever euphemism they call them, far enough in the future that the self-appointed “prophets” won’t be around to answer for just how completely wrong they are.
The good, the bad, and the monotonically divergent.
Given four factors:
#1 Adequate knowledge of how climate works.
#2 Adequate computer power.
#3 Models accurately simulating climate.
#4 Adequate intellectual honesty among modelers.
Then: The models would quickly converge toward reality.
Even without the first three, given adequate intellectual honesty among modelers, either:
A. The models would converge toward reality, based upon the most accurate few for each run.
B. Or the modelers would explicitly state that the models should not be used for any purpose, except for further development until they accurately represent reality, and would publicly protest their use for other purposes.
I have long since judged the intellectual honesty of modelers, with a few exceptions, too low to find models of interest, except to point out their defects.
Then the climate models begin to confound the naive climate scientists.
Then the climate scientists confound the naive high school dropouts.
Then the high school dropouts confound the naive UN policy makers with a climate tantrum.
Climate modeling was officially exploded on September 6, 2019.
The complete demolition came after a warning shot across the modelers’ bow in 2008. Climate models have no predictive value.
CMIP6 models are no improvement. Their air temperature projections are physically meaningless.
If anything, Steve Koonin’s cautions are very understated.
And I recall Gerald North’s boast of years ago that “We know all the forcings,” in defense of the accuracy of the warming forecast. North expressed just such confidence in an AMS note in 2008 (pdf).
It’s no recent revelation that climate models are worse than useless. They are the beating heart of the madness that is the CO2 cult.
They have rationalized the greatest theft of wealth in history, from the hard-working middle class to the already rich. And they have irrationalized the mass murder of the retired poor as excess Winter fuel poverty deaths.
Judging by the CMIP6 ECS values, they are worse:
“are no improvement” link broken.
Sorry. Here it is: https://wattsupwiththat.com/2020/10/27/cmip6-update/
North covered for the mainstream when he had to. That was years after we worked together. A lot of peer pressure, particularly from Andrew Dessler at the TAMU department, who held an honorary conference for North.
“North covered for the mainstream when he had to.”
He was under no compulsion, Robert.
If North knew that climate models could not predict the response of the climate to CO2 emissions, then he lied in their support.
If North did not know that climate models could not predict the response of the climate to CO2 emissions, then he was incompetent in their support.
If you see an alternative possibility, I’d like to know what it is.
If my experience with climate modelers has any bearing on his case, then the second possibility is the far more likely.
It is why I gave it up years ago. I haven’t bothered to keep track of their modeling baloney since 2008; they have been wrong every time, and with no improvement in sight, it was time for me to move on.
I just ran across this in the relatively new Texas Water Journal. I have read some of North’s work along with Bomar and Norwine; it seems reasonable. Not so sure about this one, but Texas does have a water problem, several actually.
Banner, J. L., C. S. Jackson, Z.-L. Yang, K. Hayhoe, C. Woodhouse, L. Gulden, K. Jacobs, G. North, R. Leung, W. Washington, X. Jian, and R. Casteel. 2010. Climate Change Impacts on Texas Water: A White Paper Assessment of the Past, Present and Future and Recommendations for Action. Texas Water Journal 1(1):1-19. https://doi.org/10.21423/twj.v1i1.1043
“Climate models are better at predicting mean climate than climate variability and climate extremes.”
People often speak about parameterized clouds and thunderstorms, etc., being the Achilles heel of climate models, but here is something to ponder — relative humidity. The assumption of constant relative humidity is a lazy, probably worthless, assumption that lies behind models badly over-estimating warming. What justifies such an assumption? Well, “Clausius-Clapeyron,” says the climate modeler with a haughty air. But Clausius-Clapeyron is an equilibrium relationship. The earth is not currently at equilibrium, has never been so, and is in fact badly out of equilibrium at times. So I think the case for constant relative humidity is phoney; simply a convenience hiding a withered effort in a proud cloak.
What’s the alternative? That would be to determine humidity balance from first principles. That is nigh impossible except in cases where the physics is well-understood, such as conduction. The next best thing is what we would do in engineering, which is to find some parameters describing the situation through experiment. These “correlation relationships” determined from experiments are accurate at best to plus or minus 10% and more likely 20%, especially in the more difficult areas of heat/mass transfer such as convection.
So the modellers are sort of stuck between a rock and a hard place. An absolutely essential part of their “theory” is just a convenience enabling them to get an answer.
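For readers unfamiliar with the term, a classic example of such an engineering correlation (my illustration of the general point, nothing used in GCMs) is the Dittus–Boelter relation for turbulent heat transfer in a pipe:

\[ Nu \;=\; 0.023\, Re^{0.8}\, Pr^{\,n}, \qquad n = 0.4 \ \text{(heating)}, \ 0.3 \ \text{(cooling)} \]

a fit to experiment usually quoted as good to only about ±25%, which is exactly the kind of uncertainty band the comment above has in mind.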
“The assumption of constant relative humidity is a lazy…”
GCMs do not ever assume constant relative humidity. They couldn’t, anyway.
“That would be to determine humidity balance from first principles.”
That is exactly what they do. They calculate surface evaporation and precipitation, and thereafter it is mass conservation with advection. The need to conserve mass is one reason why they just couldn’t set RH to be constant.
”…not ever….” Models certainly assume clouds form at 100% RH and check whether mixed parcels have exceeded 100% RH. So that’s one constant RH value that models DO have to use.
Of course. What would you expect? RH>1?
Sorry my previous reply was a bit short. It deserves a longer answer.
There are in fact two points at which RH=1 is more or less assumed: at the liquid water interface, and at condensation. It’s a bit more complicated, but RH=1 is the effect. This is based on well-known physics, and people would be upset if they said anything else.
The outcome that RH is more or less constant comes from these two fixed points and the fact that the subsequent advection pattern (with RH less than 1) doesn’t change that much as the atmosphere warms. But it could; there is no inability to change built into the models.
Supersaturation happens often enough that it probably should be considered.
Yes, they consider supersaturation and nucleation. Here is the CAM 3 version. It’s quite elaborate.
Then why do climate scientists talk about it when they talk about water vapor feedback? Why don’t you, and other interested parties, actually correct general misunderstandings of the public along such lines? Why does the MODTRAN interface provide only this and constant absolute humidity as options? And making a water vapor balance from first principles? You have to be kidding me. Or they “couldn’t hold relative humidity constant, anyway”? That’s crazy; that is simply a parameter one could use a look-up table to represent. If global models can’t do that, then what else can’t they do? I don’t want to seem unnecessarily excited here, nor do I wish to unload on you specifically, but you are telling me that parameterizations are “calculations from first principles.” They are not.
We don’t understand evaporation and precipitation well enough at the microscopic level, in all of the relevant circumstances, to do what you are claiming. What you have made by your statement above is what is known as an “ex cathedra” argument. Ex cathedra arguments aren’t credible … you know, nullius in verba. I am sick of being told. I’d like to be convinced.
There are balances crucial to believing numerical simulations work correctly, and in GCMs it seems there are four: CO2 balance, H2O balance, energy balance, entropy balance. I have heard that GCMs pass the hurdle of energy balance, but no one ever suggests how close this balance is (what is the plus/minus). And I am led to believe people even adjust the model to achieve energy balance — sounds circular to me. A fundamental concept in engineering is that numbers convey not only magnitudes, but also units and uncertainty. If one cannot specify and defend all three attributes of a number, then the number doesn’t tell us much. We can’t tell if something is functional, for example, without an uncertainty. What are the requisite balances for the other three? I’ll bet entropy balance isn’t even considered.
Please convince me otherwise by telling me how the balance is calculated, i.e., what physical principles are engaged to provide transport from the ocean surface, or any other exposed surface of water, to the atmosphere, or from plants to the atmosphere, or from ice and snow. What are the results of this balance calculation, i.e., how close is the balance and how does one verify it? What is the uncertainty, i.e., doing a propagation of error through the modeling and convincing me that we understand the transport of H2O to the requisite precision that an attribution or projection of warming requires.
“Why don’t you, and other interested parties actually correct general misunderstandings of the public along such lines?”
I am very often correcting this mistaken belief at WUWT. I don’t think it exists much in the world beyond.
It’s true that the real world does have a tendency to keep RH constant, and this was used as a basis for some early 1-D models. And people may use it as a guide in some discussions. But as a matter of simple fact, it is not used as an assumption in GCMs. It cannot be.
The GCMs have models for the source behaviour – how evaporation depends on temperature and wind, and similarly evapotranspiration. There is a huge base of empirical data on this; it is very important in agriculture, for example. And they have models for precipitation, basically testing if RH>1, though nucleation and supersaturation are considered. And there is even provision for re-evaporation from raindrops. But beyond that, it is just conservation of mass as moisture is advected with the wind and passed from grid cell to grid cell. And that is just one reason why you can’t put an extra constraint on RH. There is no practical way to do it without breaching conservation of mass.
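To make the bookkeeping concrete, here is a toy one-row moisture budget of my own (purely illustrative, not code from any GCM): moisture enters by evaporation, is carried downwind cell to cell, and rains out wherever RH would exceed 1; RH itself is diagnosed from the budget, never prescribed.

```python
import numpy as np

# Toy 1-D moisture budget, purely illustrative (not from any actual GCM).
# q = specific humidity per cell; q_sat(T) = a crude saturation curve.

n_cells = 8
q = np.full(n_cells, 0.005)               # kg water vapor per kg air
T = np.linspace(300.0, 270.0, n_cells)    # K, air cooling as it moves downwind

def q_sat(T):
    # rough Clausius-Clapeyron-like curve, for illustration only
    return 0.004 * np.exp(0.06 * (T - 273.15))

evap_rate = 2e-3   # moisture source in the first (ocean) cell, per step
u_frac = 0.3       # fraction of each cell's moisture advected downwind per step

for step in range(200):
    q[0] += evap_rate                      # evaporation source (empirical in real models)
    flux = u_frac * q                      # moisture carried to the next cell
    q = q - flux + np.roll(flux, 1)        # advection, passing mass cell to cell
    q[0] -= flux[-1]                       # first cell has no upwind neighbor
    excess = np.maximum(q - q_sat(T), 0.0) # where RH would exceed 1 ...
    q -= excess                            # ... the excess rains out

rh = q / q_sat(T)
print(np.round(rh, 2))   # RH rises downwind and caps at 1 where it rains: diagnosed, not prescribed
```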
In rereading your comment I now recognize the points you are making. But there is a need to clarify one more thing. As you say, “thereafter it is mass conservation with advection. The need to conserve mass is one reason why they just couldn’t set RH to be constant.” Water vapor is a small fraction of atmospheric mass, so there are two considerations. First, unless the mass balance has a very small uncertainty, I don’t see why this prohibits setting RH as a parameter; in other words, I don’t see how mass balance serves as a constraint on the H2O balance. Second, there is this talk of adjustments of the model to achieve energy balance. Is this also done with mass balance? This still isn’t very reassuring.
Conservation of mass in CFD means conserving the mass of each component. “Conserving” really means accounting for it, through phase changes or possibly chemical reaction as well as movement.
There is a global enforcement of mass balance, as well as energy. You have to do this with any conserved quantity in a long-term solution. The local equations representing the differential equation are intended to conserve, but there can be a drift over millions of iterations; this builds up and has to be countered, hence the mechanism to enforce conservation. The changes introduced are very small compared with the local differences, on the margin of machine accuracy. There is in effect still compliance with local physics (conservation), but the requirement of global conservation is an equally important constraint which has to be met.
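A minimal sketch of the kind of global fix-up being described (my own generic illustration, not the scheme of any particular GCM): after each transport step, the domain total is nudged back to the conserved value by a correction that is tiny compared with the local updates.

```python
import numpy as np

# Generic global-conservation correction, illustrative only.
rng = np.random.default_rng(0)
field = np.ones(100_000)          # a conserved quantity per cell (arbitrary units)
target_total = field.sum()        # value the whole domain must conserve long-term

for step in range(500):
    # stand-in for a transport step whose discretization is not exactly conservative
    field += rng.normal(0.0, 1e-9, field.size)

    # global enforcement: rescale so the domain total matches the target;
    # the per-cell adjustment is minute compared with the local changes
    field *= target_total / field.sum()

print(abs(field.sum() - target_total))   # essentially zero, despite per-step drift
```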
The models also deploy a hyperviscous atmosphere to suppress enstrophy, Kevin. You should talk to Jerry Browning about that. He’s published on it.
“That is exactly what they do.”
No, they don’t. Many calculations are parameterised fits. The rest are simplified physics to make the calculation feasible. I don’t understand why you cling to the mistaken belief that GCMs are anything but a projection of what we’ve seen already, along with an amount of tuning resulting in values that seem feasible.
“and thereafter it is mass conservation with advection.”
You say this like it’s an indication the models must be doing the right thing. The same argument applies to energy conservation. If a model didn’t conserve the quantities, then you could immediately throw it out as being incorrect. But that doesn’t in any way mean the model is *correct*. It’s a necessary but insufficient condition.
Our climate resembles climate models the same way I resemble those guys in GQ. If you don’t understand ask my wife who is sure to give an honest and unflattering answer.
Until we can determine what temperatures would have been without anthropogenic CO2 we have no way of validating any models which really means climate models are useless. Being prepared for any change in temperatures – up or down – is the only logical way of looking forward. Why do we waste so much time and money based on models?
And they can be invalidated by true observations.
The most logical way of looking forward is to look backwards at the previous ice age history in the various ice core reports.
Just looking at them initially it is fairly obvious that glacier buildup is a regular feature of the switch to a glaciation.
The other large feature is the regularity of the temperature change from about +9 C down to about −4 to −5 C over the course of several thousand years. Most interglacials show this regularity, but there are a couple where the warming only went sporadically up and down around about 4 C for 20,000 years.
Overall, we are now roughly at the time for a new ice age to start. It’s highly unlikely that mankind will short-circuit that. In particular, if the current double solar cycle continues for another 20 years, it may trigger the new ice age, resulting in continuing temperature drops for centuries. Also, given the gradual decline between ice ages, anything that warms up, slows, or shortens an ice age would be very beneficial for people.
Since repeated episodes of reverse correlation over hundreds of years occur in the ice core reconstructions, it is abundantly clear that CO2’s influence on temperature, irrespective of its source, is zero. So we can determine it.
All the blather about CO2 induced anything is hypothetical bullshit and nothing more.
What the IPCC and the climate modelers don’t want to admit is that any model we can make, for the foreseeable future, can’t be finely grained enough to make a legitimate forecast. And big, broad generalizations are also useless when applied to the actual scale of the world’s climate: micrometers, not hundreds of kilometers.
The other huge problem is that differential equations are hard to make work in many of the situations they are used in. The equations need to be integrated to make functional equations that estimate what the climate is doing. The integral has to have a constant added, and these constants are often completely unknown, so guesses are made.
Lorenz discovered that a change in a single digit of one of the variables in a fairly simple climate model made drastic changes in the result. In the end, he showed that even his fairly simple model could generate literally thousands of different results that were all equally possible. When graphed, the results generated a quite beautiful butterfly-shaped image. The possible results were infinite, so the picture only showed one small part. It clearly showed that such runs could not be used to generate useful forecasts.
That leaves another problem, machine epsilon, for another day.
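A minimal sketch of the sensitivity Lorenz found, using his classic 1963 three-variable system (my illustration, not a GCM): two runs differing in the fourth decimal place of one starting value soon bear no resemblance to each other, even though both stay on the same butterfly-shaped attractor.

```python
import numpy as np

# Lorenz's 1963 system, integrated twice from nearly identical starting points
# to show sensitive dependence on initial conditions (illustrative only).

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])   # simple Euler step, adequate for a demo

a = np.array([1.0, 1.0, 20.0])
b = np.array([1.0001, 1.0, 20.0])   # one variable perturbed in the fourth decimal place

for step in range(1, 3001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}: separation = {np.linalg.norm(a - b):8.4f}")
# The 0.0001 difference grows to order 10: the two "forecasts" are soon unrelated.
```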
“What the IPCC and the climate modelers don’t want to admit is that any model we can make, for the foreseeable future, can’t be finely grained enough to make a legitimate forecast.”
What people here don’t want to admit is that GCMs are just weather forecast models, run for longer times. And they make legitimate weather forecasts.
“The other huge problem is that differential equations are hard to make work in many situations they are used in.”
Again, they work for weather forecasts. They are just CFD, which is a routine engineering activity.
“It clearly showed that the results could not be used to generate useful results.”
The butterfly is a useful result. It is the attractor, and for GCMs, as with most CFD, that is what you want to know. You don’t need to follow trajectories.
“They are just CFD, which is a routine engineering activity.” Which is why we used wind-tunnel testing and flight test to determine the actual aerodynamic performance of the aircraft. CFD is nice, it is even often helpful, but only a computer modeller would stake their life, or anyone else’s, on it.
And the engineering models are predictive only within their calibration bounds.
Climate models are purported to be predictive far beyond their calibration bounds.
Tens of thousands of lives have already been lost through reliance on climate models.
“What people here don’t want to admit is that GCMs are just weather forecast models, run for longer times. And they make legitimate weather forecasts.”
What you don’t want to admit is that climate change is a signal in the model propagated forwards with every timestep for millions of timesteps and isn’t simply a daily forecast.
He also is shockingly silent about how far out those weather forecast models remain accurate. I once worked with someone who did financial transactions that were based on weather metrics for specific events on specific days. I asked him point blank how far out you could actually rely on a weather forecast.
He answered without any hesitation.
“Two days,” he said.
Enough said.
“What people here don’t want to admit is that GCMs are just weather forecast models, run for longer times. And they make legitimate weather forecasts.”
Fake analogy.
Weather models are updated with fresh data every few hours. If this is not done, they quickly go awry. Admit that, Nick.
I don’t have a lot of faith in long-range weather forecasts, despite having Doppler radar to track precipitation, geostationary weather satellites to image clouds, and a dense network of weather stations to track winds, barometric pressure, and temperatures. I’m of the opinion that at least with respect to precipitation, the false-positive numbers are higher than false-negatives. Might climate models have a similar asymmetry? Where is the rigorous error analysis of weather forecasts, let alone climate?
I think Nick would correctly argue that the weather produced by the longer-term, “gone awry” forecast is still believable, valid weather and averages out over the long run.
But he apparently doesn’t understand that that isn’t climate change. Climate change is the changing weather over time, and that is driven by the tiny per-time-step changes a GCM has compared to a weather model.
It’s those tiny changes that have to be accurate, and the GCMs simply aren’t capable of calculating them.
” is still believable valid weather and averages out over the long run.”
Yes, it is. And it averages out in response to the forcings. That is what ultimately drives weather. So when the forcings change, the weather changes. GCMs show how that averages out.
“GCMs show how that averages out.”
GCMs show how GCMs average out.
“So when the forcings change, the weather changes. GCMs show how that averages out.”
For starters that describes TCS, not ECS.
But you’re not even really doing that because there’s more to instantaneous forcings due to CO2 than what is modeled and fitted in the case of clouds.
The feedbacks need to be calculated in a changed atmosphere even for the instantaneous future weather, and anything other than true first-principles calculations simply won’t cut it. So clouds alone break future weather projection.
Don’t weather forecasters say that their models are reasonable up to about two or three weeks out, but after that they deteriorate quickly? Yet GCMs are supposedly saying they can tell us what will happen up to a century out?
They only make “legitimate” forecasts for regions and not globally. The global weather forecasts are abysmal, because they can’t define smaller differential volumes/time slices; too much computation. Even so, the regional weather forecasts rarely are accurate out to more than a week, longer maybe if there’s a weather block operating.
Of course, chaotic systems have a horizon of predictability. Beyond that horizon, it’s impossible to predict their behavior accurately. Apparently, weather models have a horizon of predictability of only two weeks.
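The two-week figure can be framed with the standard textbook estimate of a predictability horizon (generic chaos theory, nothing specific to any weather model): an initial error δ0 grows roughly exponentially at the largest Lyapunov rate λ, so a forecast stays useful only until the error reaches some tolerance Δ:

\[ \delta(t) \;\approx\; \delta_0\, e^{\lambda t}, \qquad t_{\text{horizon}} \;\approx\; \frac{1}{\lambda}\,\ln\frac{\Delta}{\delta_0} \]

Better observations shrink δ0 but buy only logarithmic gains in lead time, which is one way to see why the practical forecast horizon has crept outward so slowly despite vastly better data and computers.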
I suspect that the models would work a whole lot better if they had a realistic empirical ECS included in them, say 0.7 C/doubling or thereabouts. Perhaps they could ask Lindzen and Spencer about which is the best number to use?
Of course if they did that CAGW would be precluded, and about 97% of all climate modeling jobs would disappear overnight.
A more realistic ECS would be zero.
Confirmation sought.
Many people for many years have claimed that modellers select a certain run among many to present for exercises such as the CMIP tests and that this is a source of bias that is not shown in final model uncertainty.
I have never seen a collection of model runs from one source, where all is kept constant from run to run except a few initial conditions. I am looking for a reference that shows such multiple runs.
Unless bloggers here can provide such a reference, it might be unkind and unscientific to make these allegations of selection bias. Scepticism is damaged if untrue claims are regurgitated without evidence, so be fair. Geoff S
Geoff, you’ll find a set of runs in Stainforth, et al., 2005 Uncertainty in predictions of the climate response to rising levels of greenhouse gases.
It’s pretty embarrassing.
A large fraction of the runs show cooling. They chalk the wrongness up to using a slab-ocean.
But of course, the slab-ocean didn’t compromise the right answers.
You can get the pdf here.
Thank you, Pat.
We hope you are keeping well.
This last week saw my 80th birthday, and I immediately felt unwell.
Thank you for the link, just what I was seeking.
Geoff S
Doing well, Geoff, thanks. Trust you’re fine. Keep up the zinc, Vitamin D and rutin (quercetin). 🙂
For a few years now, I have seen most climate models as treating positive feedbacks, especially the water-vapor feedback, as greater than they actually are. I see a major cause of this being climate models being “tuned” to hindcast the past, especially the 30 years before their hindcast-forecast transitions, without consideration of multidecadal oscillations. Most climate models, including the CMIP3, CMIP5, and CMIP6 ones, have the last 30 years of their hindcast/“historical” period falling in a time when multidecadal oscillations were temporarily contributing to global warming. I see this lack of consideration causing the models to credit positive feedbacks (especially the water-vapor feedback) for warming that was actually caused by the warming phase of multidecadal oscillations, which makes their projections of warming after the hindcast-forecast transition overstated.
“I see a major cause of this being climate models being “tuned” to hindcast the past, especially the 30 years before their hindcast-forecast transitions”
That just isn’t true. It isn’t how tuning works. One obvious piece of evidence is that they don’t hindcast the last 30 years very well.
It is frustrating to find so many people who think they know all about tuning in GCMs, but don’t refer to any sources.
“It is frustrating to find so many people who think they know all about tuning in GCMs”
It’s frustrating that you believe tuning the components of a model and then combining those components in the calculation results in an untuned calculation.
In 2007 Stainforth et al. published ‘Confidence, uncertainty and decision-support relevance in climate predictions’ in the Philosophical Transactions of The Royal Society, 365, 2145-2161. Myles Allen was a co-author. In the preamble they say:
“Here, our focus is solely on complex climate models as predictive tools on decadal and longer timescales. We argue for a reassessment of the role of such models when used for this purpose…. Complex climate models, as predictive tools for many variables and scales, cannot be meaningfully calibrated because they are simulating a never before experienced state of the system; the problem is one of extrapolation. It is therefore inappropriate to apply any of the currently available generic techniques which utilise observations to calibrate or weight models to produce forecast probabilities for the real world. To do so is misleading to the users of climate science in wider society.”
And by your own admission the models are poor at hindcasting, so what are they good for?
From Stainforth et al. 2005: Uncertainty in predictions of the climate response to rising levels of greenhouse gases
So using the right combination of parameters in valid ranges, the model cooled and they threw the results out as obviously wrong.
So what is your understanding of how tuning works?
I’ll bet that if you put enough government grant money into a computer model, you’ll get the answer the government wants. The government is the bureaucrats, politicians, and the ever present lobbyists for all kinds of pet projects.