Robert L. Bradley Jr. – June 23, 2021

“Climate modeling is central to climate science….” (Steven Koonin, below)
When the history of climate modeling comes to be written in some distant future, the major story may well be how the easy, computable answer turned out to be the wrong one, resulting in overestimated warming and false scares from the enhanced (man-made) greenhouse effect.
Meanwhile, empirical and theoretical evidence is mounting toward this game-changing verdict despite the best efforts of the establishment to look the other way.
Consider a press release this month from the University of Colorado Boulder, “Warmer Clouds, Cooler Planet,” subtitled “precipitation-related ‘feedback’ cycle means models may overestimate warming.”
“Today’s climate models are showing more warmth than their predecessors,” the announcement begins.
But a paper published this week highlights how models may err on the side of too much warming: Earth’s warming clouds cool the surface more than anticipated, the German-led team reported in Nature Climate Change.
“Our work shows that the increase in climate sensitivity from the last generation of climate models should be taken with a huge grain of salt,” said CIRES Fellow Jennifer Kay, an associate professor of atmospheric and oceanic sciences at CU Boulder and co-author on the paper.
The press release goes on to state how incorporating this negative feedback will improve next-generation climate models, something that is of the utmost importance given the upcoming Sixth Assessment of the Intergovernmental Panel on Climate Change (IPCC). But will conflicted modelers and the politicized IPCC be upfront about the elephant in the room?
Background
Strong positive feedbacks are what turn the modest, and arguably even beneficial, primary warming from carbon dioxide (CO2) and other manmade greenhouse gases (GHG) into something alarming. The assumption has been that increased evaporation in a warmer world (primarily from the oceans) produces a strongly positive water-vapor feedback, doubling or even tripling the primary warming.
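In round numbers, the arithmetic of such amplification is simple: the no-feedback warming is divided by one minus a net feedback factor. Below is a minimal sketch, assuming an illustrative no-feedback response of about 1.1°C per CO2 doubling and purely hypothetical feedback factors; it is meant only to show how a modest primary warming becomes a doubled or tripled one.

```python
# Minimal sketch of the standard feedback arithmetic (illustrative values only):
# total warming = no-feedback warming / (1 - f), where f is the net feedback factor.

DT_NO_FEEDBACK = 1.1  # deg C per CO2 doubling; approximate no-feedback (Planck-only) response

def amplified_warming(f):
    """Warming per CO2 doubling for a hypothetical net feedback factor f (f < 1)."""
    return DT_NO_FEEDBACK / (1.0 - f)

for f in (0.0, 0.45, 0.63):  # hypothetical feedback factors
    print(f"f = {f:.2f}: warming per doubling ~ {amplified_warming(f):.1f} C")
# f = 0.00 -> 1.1 C (no feedback)
# f = 0.45 -> 2.0 C (primary warming roughly doubled)
# f = 0.63 -> 3.0 C (primary warming roughly tripled)
```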
In technical terms, water molecules trap heat, and clouds or vapor in the upper tropical troposphere – where the air is extremely dry – trap substantially more heat, thickening the greenhouse. How water comes to inhabit this upper layer (≈30,000–50,000 feet) to either block (magnify) or release (diminish) the heat remains in debate, leaving even the sign of the externality unknown for climate economics. And it is in the upper troposphere that climate models are most confounded by the data.
Assuming fixed relative atmospheric humidity allows modelers to invoke ceteris paribus against altered physical processes that might well negate the secondary warming. This controversial assumption opens the door for hyper-modeling that is at odds with reality. (For economists, the analogy would be assuming “perfect competition” to unleash hyper theorizing.)
For decades, model critics have questioned the simplified treatment of complexity. Meanwhile, climate models have predicted much more warming than has transpired.
Theoreticians have long been at odds with model technicians. MIT’s Richard Lindzen, author of Dynamics in Atmospheric Physics, has advanced several hypotheses for why the water-vapor feedback is much weaker than modeled. Judith Curry, whose blog Climate Etc. is a leading source for following physical-science and related developments, is another critic of high-sensitivity models.
“There’s a range of credible perspectives that I try to consider,” she states. “It’s a very complex problem, and we don’t have the answers yet.”
And now we have way too much confidence in some very dubious climate models and inadequate data sets. And we’re not really framing the problem broadly enough to … make credible projections about the range of things that we could possibly see in the 21st century.
Mainstream Recognition
Climate scientists know that climate models are extremely complicated and fragile. In What We Know About Climate Change (2018, p. 30), Kerry Emanuel of MIT explains:
Computer modeling of global climate is perhaps the most complex endeavor ever undertaken by humankind. A typical climate model consists of millions of lines of computer instructions designed to simulate an enormous range of physical phenomena….
Although the equations representing the physical and chemical processes in the climate system are well known, they cannot be solved exactly…. The problem here is that many important processes happen at much smaller scales.
The parameterization problem is akin to the fallacies of macroeconomics, where the crucial causality of individual action is ignored. Microphysics is the driver of climate change, yet the equations are unsettled and sub-grid scale. Like macroeconomics, macro-climatology should have been highly qualified and demoted long ago.
My mentor Gerald North, former head of the climatology department at Texas A&M, had a number of observations about the crude, overrated nature of climate models back in 1998–99 that are still relevant today.
We do not know much about modeling climate. It is as though we are modeling a human being. Models are in position at last to tell us the creature has two arms and two legs, but we are being asked to cure cancer.
There is a good reason for a lack of consensus on the science. It is simply too early. The problem is difficult, and there are pitifully few ways to test climate models.
One has to fill in what goes on between 5 km and the surface. The standard way is through atmospheric models. I cannot make a better excuse.
The different models couple to the oceans differently. There is quite a bit of slack here (undetermined fudge factors). If a model is too sensitive, one can just couple in a little more ocean to make it agree with the record. This is why models with different sensitivities all seem to mock the record about equally well. (Modelers would be insulted by my explanation, but I think it is correct.)
[Model results] could also be sociological: getting the socially acceptable answer.
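North’s remark about coupling in “a little more ocean” can be made concrete with the simplest transient energy-balance relation, in which warming over the historical record scales roughly as forcing divided by the sum of a feedback parameter (lambda) and an ocean heat-uptake efficiency (kappa). The sketch below uses made-up parameter values; it is not any particular model, only an illustration of how two “models” with very different equilibrium sensitivities can track the same record.

```python
# Toy illustration (not a GCM): transient warming ~ F / (lambda + kappa), where
# lambda is a climate feedback parameter (W/m2 per K, inversely related to sensitivity)
# and kappa is an ocean heat-uptake efficiency (W/m2 per K).
# All parameter values below are hypothetical.

F_HIST = 2.2   # assumed historical forcing, W/m2
F_2X   = 3.7   # canonical forcing from doubled CO2, W/m2

models = {
    "low-sensitivity model":  {"lam": 1.6, "kappa": 0.3},
    "high-sensitivity model": {"lam": 0.9, "kappa": 1.0},
}

for name, p in models.items():
    transient = F_HIST / (p["lam"] + p["kappa"])   # both match the same record
    equilibrium = F_2X / p["lam"]                  # yet imply very different futures
    print(f"{name}: historical warming ~ {transient:.2f} C, "
          f"equilibrium sensitivity ~ {equilibrium:.1f} C")
```

Both toy “models” reproduce the same historical warming, yet their equilibrium sensitivities differ by nearly a factor of two, which is the slack North describes.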
The IPCC 5th assessment (2013), the “official” or mainstream report, recognizes fundamental uncertainty while accepting model methodology and results at face value. “The complexity of models,” it is stated (p. 824), “has increased substantially since the IPCC First Assessment Report in 1990….”
However, every bit of added complexity, while intended to improve some aspect of simulated climate, also introduces new sources of possible error (e.g., via uncertain parameters) and new interactions between model components that may, if only temporarily, degrade a model’s simulation of other aspects of the climate system. Furthermore, despite the progress that has been made, scientific uncertainty regarding the details of many processes remains.
The humbling nature of climate modeling was publicized by The Economist in 2019. “Predicting the Climate Future is Riddled with Uncertainty” explained:
[Climate modeling] is a complicated process. A model’s code has to represent everything from the laws of thermodynamics to the intricacies of how air molecules interact with one another. Running it means performing quadrillions of mathematical operations a second—hence the need for supercomputers.
[S]uch models are crude. Millions of grid cells might sound a lot, but it means that an individual cell’s area, seen from above, is about 10,000 square kilometres, while an air or ocean cell may have a volume of as much as 100,000km3. Treating these enormous areas and volumes as points misses much detail.
Clouds, for instance, present a particular challenge to modellers. Depending on how they form and where, they can either warm or cool the climate. But a cloud is far smaller than even the smallest grid-cells, so its individual effect cannot be captured. The same is true of regional effects caused by things like topographic features or islands.
Building models is also made hard by lack of knowledge about the ways that carbon—the central atom in molecules of carbon dioxide and methane, the main heat-capturing greenhouse gases other than water vapour—moves through the environment.
“But researchers are doing the best they can,” The Economist concluded.
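For perspective, The Economist’s figures are consistent with the roughly 100 km horizontal grid spacing Koonin cites below. A quick back-of-envelope check, using assumed numbers only:

```python
# Back-of-envelope check of the grid figures quoted above (illustrative only).

horizontal_spacing_km = 100.0                 # typical GCM grid spacing (see Koonin, below)
cell_area_km2 = horizontal_spacing_km ** 2
print(f"cell footprint: {cell_area_km2:,.0f} km2")        # 10,000 km2, The Economist's figure

# A cell with that footprint reaches ~100,000 km3 only if it is ~10 km thick,
# i.e., a column spanning roughly the depth of the troposphere.
implied_thickness_km = 100_000 / cell_area_km2
print(f"implied thickness: {implied_thickness_km:.0f} km")  # 10 km
```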
Climate models, in fact, are significantly overestimating warming, by as much as one-half. And the gap is widening as a coolish 2021 is well underway. As for the future, anthropogenic warming is constrained by the logarithmic, rather than linear, effect of GHG forcing. This saturation effect means that as the atmosphere holds more CO2, each added increment produces less additional warming than the last. The warming produced by a doubling of CO2 does not recur at a tripling but at a quadrupling.
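A worked check, using the widely cited simplified forcing expression ΔF = 5.35 ln(C/C0) W/m2 (Myhre et al., 1998), shows why the doubling increment recurs at a quadrupling rather than a tripling:

```python
# Worked check of the logarithmic (saturation) effect, using the simplified
# forcing expression dF = 5.35 * ln(C/C0) W/m2 (Myhre et al., 1998).
import math

def forcing(ratio):
    """Radiative forcing (W/m2) for a CO2 concentration ratio C/C0."""
    return 5.35 * math.log(ratio)

print(f"2x CO2: {forcing(2):.2f} W/m2")   # ~3.7 W/m2
print(f"3x CO2: {forcing(3):.2f} W/m2")   # ~5.9 W/m2 (not double the 2x value)
print(f"4x CO2: {forcing(4):.2f} W/m2")   # ~7.4 W/m2 (the 2x increment repeats only here)
```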
The mitigation window is rapidly closing, in other words, which explains the shrill language from prominent politicians. But it is the underlying climate models, not the climate itself, that are running out of time.
“Unsettled” Goes Mainstream
The crude methodology and false conclusions of climate modeling are emerging from the shadows. Physicist and computer expert Steven Koonin, in his influential Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters (chapter 4), explains:
Climate modeling is central to climate science…. Yet many important phenomena occur on scales smaller than the 100 km (60 mile) grid size (such as mountains, clouds, and thunderstorms), and so researchers must make “subgrid” assumptions to build a complete model….
Since the results generally don’t much look like the climate system we observe, modelers then adjust (“tune”) these parameters to get a better match with some features of the real climate system.
Undertuning leaves the model unrealistic, but overtuning “risks cooking the books—that is, predetermining the answer,” adds Koonin. He then quotes from a paper co-authored by 15 world-class modelers:
… tuning is often seen as an unavoidable but dirty part of climate modeling, more engineering than science, an act of tinkering that does not merit recording in the scientific literature…. Tuning may be seen indeed as an unspeakable way to compensate for model errors.
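To make the idea of tuning concrete, here is a deliberately crude sketch, nothing like an actual GCM workflow: a one-parameter toy model whose uncertain feedback parameter is scanned until the simulated warming best matches a chosen target record. All numbers and the fitting procedure are illustrative assumptions.

```python
# Toy illustration of "tuning": adjust an uncertain parameter so a simple model
# reproduces a chosen target.  Purely illustrative; real GCM tuning involves
# many parameters and many targets.
import numpy as np

years = np.arange(1950, 2021)
forcing = 0.03 * (years - 1950)            # assumed linear forcing ramp, W/m2
target_warming = forcing / 1.3             # pretend "observed" record, deg C

def toy_model(lam):
    """Equilibrium-style toy: warming = forcing / lambda (lambda in W/m2 per K)."""
    return forcing / lam

# "Tune" lambda by scanning candidate values and keeping the best match.
candidates = np.linspace(0.5, 3.0, 251)
errors = [np.mean((toy_model(lam) - target_warming) ** 2) for lam in candidates]
best_lam = candidates[int(np.argmin(errors))]
print(f"tuned lambda = {best_lam:.2f} W/m2 per K")   # lands near 1.3 by construction
```

The concern Koonin raises about “cooking the books” is precisely this: once such adjustment targets the historical record itself, agreement with that record is no longer independent evidence of model skill.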
Conclusion
Climate modeling has arguably been worse than nothing because false information has been presented as true and “consensus.” Alarmism and disruptive policy activism (forced substitution of inferior energies; challenges to lifestyle norms) have taken on a life of their own. Fire, ready, aim has substituted for prudence, from science to public policy.
Data continue to confound naïve climate models. Very difficult theory is slowly but surely explaining why. The climate debate is back to the physical science, which it never should have left.
Confirmation sought.
Many people for many years have claimed that modellers select a certain run among many to present for exercises such as the CMIP tests and that this is a source of bias that is not shown in final model uncertainty.
I have never seen a collection of model runs from one source, where all is kept constant from run to run except a few initial conditions. I am looking for a reference that shows such multiple runs.
Unless bloggers here can provide such a reference, it might be unkind and unscientific to make these allegations of selection bias. Scepticism is damaged if untrue claims are regurgitated without evidence, so be fair. Geoff S
Geoff, you’ll find a set of runs in Stainforth, et al., 2005 Uncertainty in predictions of the climate response to rising levels of greenhouse gases.
It’s pretty embarrassing.
A large fraction of the runs show cooling. They chalk the wrongness up to using a slab-ocean.
But of course, the slab-ocean didn’t compromise the right answers.
You can get the pdf here.
Thank you, Pat.
We hope you are keeping well.
This last week saw my 80th birthday, and I immediately felt unwell.
Thank you for the link, just what I was seeking.
Geoff S
Doing well, Geoff, thanks. Trust you’re fine. Keep up the zinc, Vitamin D and rutin (quercetin). 🙂
For a few years already, I have seen most climate models as having positive feedbacks, especially the water vapor feedback, that are stronger than is actually the case. I see a major cause of this being climate models being “tuned” to hindcast the past, especially the 30 years before their hindcast-forecast transitions, and to do so without consideration of multidecadal oscillations. Most climate models, including the CMIP3, CMIP5 and CMIP6 ones, have the last 30 years of their hindcast / “historical” period falling during a time when multidecadal oscillations were temporarily contributing to global warming. I see this lack of consideration causing climate models to credit positive feedbacks (especially the water vapor feedback) for warming that was actually caused by the warming phase of multidecadal oscillations, which makes their projections of warming after the hindcast-forecast transition overstated, because the positive feedbacks (especially the water vapor feedback) are modeled as greater than they actually are.
“I see a major cause of this being climate models being “tuned” to hindcast the past, especially the 30 years before their hindcast-forecast transitions”
That just isn’t true. It isn’t how tuning works. One obvious piece of evidence is that they don’t hindcast the last 30 years very well.
It is frustrating to find so many people who think they know all about tuning in GCMs, but don’t refer to any sources.
“It is frustrating to find so many people who think they know all about tuning in GCMs”
It’s frustrating that you believe tuning components of a model and then combining those components in the calculation results in an untuned calculation.
In 2007, Stainforth et al. published ‘Confidence, uncertainty and decision-support relevance in climate predictions’ in the Philosophical Transactions of The Royal Society, 365, 2145-2161. Myles Allen was a co-author. In the preamble they say:
“Here, our focus is solely on complex climate models as predictive tools on decadal and longer timescales. We argue for a reassessment of the role of such models when used for this purpose…. Complex climate models, as predictive tools for many variables and scales, cannot be meaningfully calibrated because they are simulating a never before experienced state of the system; the problem is one of extrapolation. It is therefore inappropriate to apply any of the currently available generic techniques which utilise observations to calibrate or weight models to produce forecast probabilities for the real world. To do so is misleading to the users of climate science in wider society.”
And by your own admission the models are poor at hindcasting, so what are they good for?
From Stainforth et al., 2005: Uncertainty in predictions of the climate response to rising levels of greenhouse gases
So using the right combination of parameters in valid ranges, the model cooled and they threw the results out as obviously wrong.
So what is your understanding of how tuning works?
I’ll bet that if you put enough government grant money into a computer model, you’ll get the answer the government wants. The government is the bureaucrats, politicians, and the ever-present lobbyists for all kinds of pet projects.