How Many Times Do Useless Climate Models Have to Be Killed Before They Die?

“A Climate Modeller Spills the Beans” was posted by Pat, September 25, 2019 at 9:02 am.


Guest post by Mike Jonas

Quadrant Online has just published a remarkable article – A Climate Modeller Spills the Beans – in which a highly qualified climate scientist and modeller makes it abundantly clear that the climate models, as currently coded and used, can never predict future climate.

The article is at https://quadrant.org.au/opinion/doomed-planet/2019/09/a-climate-modeller-spills-the-beans/ and the link appeared in comments by Pat in earlier WUWT posts. [Thanks, Pat. I thought the information was worthy of an article here in its own right, so this is it.]

Dr. Mototaka Nakamura is a top-level oceanographer and meteorologist who worked from 1990 to 2014 on cloud dynamics, and on atmospheric and ocean flows. He has published about 20 climate papers on fluid dynamics, and he has now quite simply had enough of the shenanigans that pass for climate science and climate modelling.

In June, he put out a small book in Japanese on the sorry state of climate science, titled “Confessions of a climate scientist: the global warming hypothesis is an unproven hypothesis”. But behind that mild title is a hard-hitting exposure of the uselessness of climate models for forecasting. In a sane world, it would kill the current set of climate models absolutely stone dead. But of course, at present the world is anything but sane.

Dr Nakamura goes into detail about many of the failures of climate models. Some of those failures are well known at WUWT, and I suspect that they are just as well known to some of the modellers, e.g.:

These models completely lack some critically important climate processes and feedbacks, and represent some other critically important climate processes and feedbacks in grossly distorted manners to the extent that makes these models totally useless for any meaningful climate prediction.

I myself used to use climate simulation models for scientific studies, not for predictions, and learned about their problems and limitations in the process.

and

Ad hoc representation of clouds may be the greatest source of uncertainty in climate prediction. A profound fact is that only a very small change, so small that it cannot be measured accurately…in the global cloud characteristics can completely offset the warming effect of the doubled atmospheric CO2.

and

Anyone studying real cloud formation and then the treatment in climate models would be flabbergasted by the perfunctory treatment of clouds in the models.

… but it is well worth reading the full Quadrant Online article – and sending the link on to all your friends and of course to all your enemies.
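As a rough sanity check on the cloud claim in the second quote, here is a back-of-envelope sketch in Python. The numbers are round textbook values (solar constant 1361 W/m2, planetary albedo 0.30, 3.7 W/m2 forcing for doubled CO2), not anything taken from Dr Nakamura's book:

```python
# Back-of-envelope check: how small an albedo change offsets doubled CO2?
# Round textbook numbers only; nothing here comes from a climate model.
S0 = 1361.0            # solar constant, W/m^2
insolation = S0 / 4.0  # global-mean top-of-atmosphere insolation, ~340 W/m^2
albedo = 0.30          # present planetary albedo (clouds supply roughly half)
F_2xCO2 = 3.7          # canonical radiative forcing for doubled CO2, W/m^2

reflected = insolation * albedo   # ~102 W/m^2 of sunlight reflected to space
d_albedo = F_2xCO2 / insolation   # albedo increase that cancels the forcing

print(f"reflected shortwave: {reflected:.0f} W/m^2")
print(f"albedo change that offsets 2xCO2: {d_albedo:.4f} "
      f"({100.0 * d_albedo / albedo:.1f}% relative)")
# -> about 0.011 absolute, a ~3.6% relative change in albedo: exactly the
#    kind of "very small change" in cloud behaviour the quote describes.
```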

Thanks, Pat.

172 Comments
James Zott
September 29, 2019 7:55 am

To paraphrase:
Models will often completely lack some critically important processes and feedbacks, and represent some other critically important processes and feedbacks in grossly distorted manners to the extent that makes these models totally useless for any meaningful prediction. We use simulation models for scientific studies, not for predictions, and learn about the models' problems and limitations in the process.

Funny, I think I learned this in my computer modeling/simulation classes.

Clyde Spencer
Reply to  James Zott
September 29, 2019 11:00 am

James Zott
+1

Jim G
September 29, 2019 8:01 am

In proper application of the scientific method, we would call a climate model a hypothesis. The gold standard test of that hypothesis is whether it accurately predicts future outcomes, as it is designed to do. If it does not, it is falsified. We have moved into an age where that tried and true path has been abandoned: ‘science’ as it presently works in the climate ‘sciences’ is allowed to ignore the falsification and to adjust the data to fit the model. What’s worse, the model’s advocates are allowed to make all sorts of frightening, catastrophic predictions to force scientists to join a nonexistent ‘consensus’.

Mike Bryant
September 29, 2019 8:03 am

If weather forecasting models cannot be trusted beyond a few weeks, why does anyone think that climate forecasting models CAN be? Do you really have to be a scientist, climate or otherwise, to grasp this simple truth?
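A minimal sketch of why initial-condition error kills weather forecasts, using the classic Lorenz-63 toy system (nothing here comes from any real forecast model): two runs that differ by one part in a billion at the start end up on entirely different parts of the attractor. Climate modellers answer that climate is a boundary-value problem of long-term averages rather than an initial-value problem; whether that answer holds is exactly what this thread is arguing about.

```python
# Two Lorenz-63 runs, identical to one part in a billion at t=0.
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (crude but adequate here)."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # a 'measurement error' of one part in 1e9

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {0.01 * step:4.0f}   separation = {np.linalg.norm(a - b):.3e}")
# The tiny initial error grows by many orders of magnitude and then saturates
# at the size of the attractor: past that point the 'forecast' is worthless.
```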

bwegher
September 29, 2019 10:01 am

Climate models are simulations of reality.
Some climate models resemble Earth’s climate in the same way that Pinocchio resembles a real boy.
Other climate models resemble Earth’s climate in the same way that Cinderella resembles a real girl.

Gamecock
Reply to  bwegher
September 29, 2019 2:41 pm

Good post. But . . . .

“Climate models are simulations of reality.”

They are simulations of the author’s ideas. They have no connection with reality.

But then reality is a creation of the right wing to confuse people.

observa
September 29, 2019 10:15 am

Presumably there are still a few boffins beavering away somewhere with econometrics and models of the world economy. They just need to tweak a few more parameters and add a couple more ceteris paribuses, and all will be revealed. They have almost put the finishing touches on the black swan stagflation of the seventies, and then it’s on to the GFC…

splitdog homee
September 29, 2019 10:24 am

The Climate Nazis love models. Models allow them to make all kinds of unfounded predictions, and they get to play with computers; what is not to like? BTW “Climate Nazis” is my new favorite term.

John Garrett
September 29, 2019 10:53 am

Anybody with any experience of building or using computer-based models for the purpose of predicting the future of highly complex, multi-variate, non-linear phenomena knows better than to take the output seriously.

Kevin kilty
September 29, 2019 11:22 am

Pat, I appreciate your guest blog, but I downloaded the Kindle version and have read the English portions several times now, and you didn’t quote the specifics of why Mototaka feels the models are inadequate.

He takes a very restricted view, largely two issues:
1. The inadequate modeling of the ocean, owing to poor resolution, and
2. Parameterizations

Under topic one he states a very specific complaint, which is that the ocean momentum/salt/thermal diffusion is wrong. For example, momentum diffusion produces overly smoothed fields, and doesn’t produce any negative viscosity phenomena. Would a coupled ocean/atmosphere model that didn’t produce western boundary currents suggest a problem?
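To make the over-smoothing complaint concrete, here is a toy one-dimensional sketch: the same narrow ‘jet’ diffused with a modest and with a large eddy viscosity. All values are invented for illustration; none come from Dr Nakamura's book or from any GCM.

```python
# Diffuse a narrow 'jet' with two eddy viscosities and see what survives.
import numpy as np

n, L = 200, 1.0e6                 # grid points, domain width (m); dx = 5 km
dx = L / n
x = np.linspace(0.0, L, n)
jet = np.exp(-(((x - L / 2.0) / (0.02 * L)) ** 2))  # ~20 km wide Gaussian current

def diffuse(u, nu, dt, steps):
    """Explicit scheme for du/dt = nu * d2u/dx2 with pinned endpoints."""
    u = u.copy()
    for _ in range(steps):
        u[1:-1] += nu * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

dt = 1.0e4                        # s, keeps nu*dt/dx^2 below the 0.5 stability limit
for nu in (1.0e2, 1.0e3):         # 'modest' vs 'coarse-model' viscosity, m^2/s
    u = diffuse(jet, nu, dt, steps=500)      # ~58 days of diffusion
    print(f"nu = {nu:.0e} m^2/s -> peak current remaining: {u.max():.2f} of 1.00")
# The ten-times-larger viscosity flattens the jet far harder: structure only a
# few grid cells wide (think western boundary currents on a coarse grid)
# simply cannot survive an over-diffusive momentum closure.
```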

I think there are about a dozen credible and pertinent criticisms, and would like to know what his critics think of these.

Already the complaints against him include that he hasn’t worked in climate science for 9 years, so his thoughts are out of date. Well, Lagrange hasn’t done any mechanics for a good 250 years; are his views of mechanics out of date? What would be more important is knowing whether the issues he complains about have been rectified, or whether they are still issues.

William Astley
September 29, 2019 11:39 am

The IPCC general circulation models (GCMs) have more than 100 free variables that are subjectively set to produce the ‘predicted’ warming.

What anchors the subjective GCM calculations that produce warming of 3C to 5C, and our thinking, is Hansen’s calculated no-feedback warming of 1.2C for a doubling of atmospheric CO2.

It is interesting that peer-reviewed “one-dimensional” studies predicting a warming of 0.24C and 0.17C for a doubling of atmospheric CO2 were published at the same time as Hansen’s paper.

The so-called one-dimensional radiative-convective model is a toy model. Toy models are widely used in science and are very useful if they are conceptually correct.

The problem is the cult of CAGW, Hansen in this case:

1) fixed the lapse rate for the study (it is a physical fact that the additional CO2 increases the convective cooling of the atmosphere, which offsets the warming by reducing the lapse rate, so there will be more warming higher in the atmosphere and significantly less surface warming), and

2) ignored the fact that the atmosphere is saturated with water vapor in the tropics, which greatly reduces the greenhouse effect in the lower atmosphere due to the infrared frequency overlap of water vapour and CO2.

https://drive.google.com/file/d/0B74u5vgGLaWoOEJhcUZBNzFBd3M/view?pli=1

http://hockeyschtick.blogspot.ca/2015/07/collapse-of-agw-theory-of-ipcc-most.html

…In the 1DRCM studies, the most basic assumption is the fixed lapse rate of 6.5K/km for 1xCO2 and 2xCO2.

There is no guarantee, however, for the same lapse rate maintained in the perturbed atmosphere with 2xCO2 [Chylek & Kiehl, 1981; Sinha, 1995]. Therefore, the lapse rate for 2xCO2 is a parameter requiring a sensitivity analysis as shown in Fig.1. In the figure, line B shows the FLRA giving a uniform warming for the troposphere and the surface. Since the CS (FAH) greatly changes with a minute variation of the lapse rate for 2xCO2, the computed results of the 1DRCM studies in Table 1 are theoretically meaningless along with the failure of the FLRA.

In physical reality, the surface climate sensitivity is 0.1~0.2K from the energy budget of the earth and the surface radiative forcing of 1.1 W/m2 for 2xCO2.

Since there is no positive feedback from water vapor and ice albedo at the surface, the zero feedback climate sensitivity CS (FAH) is also 0.1~0.2K. A 1K warming occurs in responding to the radiative forcing of 3.7W/m2 for 2xCO2 at the effective radiation height of 5km. This gives the slightly reduced lapse rate of 6.3K/km from 6.5K/km as shown in Fig.2.

In the physical reality with a bold line in Fig.2, the surface temperature increases as much as 0.1~0.2K with the slightly decreased lapse rate of 6.3K/km from 6.5K/km.

Since the CS (FAH) is negligibly small at the surface, there is no water vapor or ice albedo feedback, which are large positive feedbacks in the 3DGCMs studies of the IPCC.

…. (c) More than 100 parameters are utilized in the 3DGCMs (William: Three dimensional General Circulation Models, silly toy models) giving the canonical climate sensitivity of 3K claimed by the IPCC with the tuning of them.

The following are supporting data for the Kimoto lapse rate theory above.

(A) Kiehl & Ramanathan (1982) shows the following radiative forcing for 2xCO2. Radiative forcing at the tropopause: 3.7W/m2. Radiative forcing at the surface: 0.55~1.56W/m2 (averaged 1.1W/m2).

This denies the FLRA giving the uniform warming throughout the troposphere in the 1DRCM and the 3DGCMs studies.

(B) Newell & Dopplick (1979) obtained a climate sensitivity of 0.24K considering the evaporation cooling from the surface of the ocean.

(C) Ramanathan (1981) shows the surface temperature increase of 0.17K with the direct heating of 1.2W/m2 for 2xCO2 at the surface.
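The lapse-rate arithmetic in the passages quoted above can be checked in a few lines. This is my reading of the argument with round numbers (effective radiation height 5 km, 1 K of warming at that height for 3.7 W/m2); it is a sketch, not code from Kimoto or any cited paper:

```python
# Surface warming = warming at the radiating height + (lapse-rate change) * depth:
#   dTs = dTh + (gamma2 - gamma1) * h
h = 5.0        # effective radiation height, km
dTh = 1.0      # warming at height h for 2xCO2 (3.7 W/m^2), K
gamma1 = 6.5   # assumed lapse rate before doubling, K/km

for gamma2 in (6.6, 6.5, 6.4, 6.3, 6.2):   # candidate 2xCO2 lapse rates
    dTs = dTh + (gamma2 - gamma1) * h
    print(f"2xCO2 lapse rate {gamma2:.1f} K/km -> surface warming {dTs:+.1f} K")
# A change of just 0.1 K/km in an assumed, unmeasured parameter moves the
# surface answer by 0.5 K, which is the quoted passage's point about 1DRCMs.
# (With these round numbers 6.3 K/km gives ~0 K at the surface; the quoted
# 0.1~0.2 K evidently comes from slightly less rounded inputs.)
```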

Alan D. McIntire
September 29, 2019 12:20 pm

Speaking of BAD climate models, browsing the internet regarding another issue led me to this
January 4, 2008 post at “Climate Audit”

https://climateaudit.org/2008/01/04/ipcc-on-radiative-forcing-1-ar11990/

The item that caught my eye was this quote regarding IPCC AR1:

“3.3.2 Water Vapor Feedback
… an increase in one greenhouse gas (CO2) induces an increase in yet another greenhouse gas (water vapor) resulting in a positive feedback…..”

To be more specific on this point, Raval and Ramanathan (1989) have recently employed satellite data to quantify the temperature dependence of the water vapor greenhouse effect. From their results, it readily follows (Cess, 1989) that water vapor feedback reduces ΔF/ΔTs from the prior value of 3.3 Wm-2K-1 to 2.3 Wm-2K-1. This in turn increases λ from 0.3 Km2W-1 to 0.43 Km2W-1 and thus increases the global warming from ΔTs = 1.2 deg C to 1.7 deg C.”

For the sake of argument, I’ll not dispute that additional warming might lead to additional water vapor, but this SECOND part below is wrong on the face of it!

“There is yet a further amplification caused by the increased water vapor. Since water vapor also absorbs solar radiation, water vapor feedback leads to an additional heating of the climate system through enhanced absorption of solar radiation. In terms of ΔS/ΔTs as appears within the expression for λ, this results in ΔS/ΔTs = 0.2 Wm-2K-1 (Cess et al 1989) so that λ is now 0.48 Km2W-1 while ΔTs = 1.9 deg C. The point is that water vapor feedback has amplified the initial global warming of 1.2 deg C to 1.9 deg C, an amplification factor of 1.6.”

What is being described is zero or negative feedback. If the atmosphere both absorbs radiation from the Sun and re-emits it, the net greenhouse effect of the part of the radiation absorbed directly by the atmosphere is zero. If the atmosphere absorbs radiation from the Sun but not the infrared from the ground, you get a NEGATIVE greenhouse effect, as happens at some frequencies on Saturn’s moon Titan. That additional postulated 0.2 deg C factor, hypothetically INCREASING warming, would actually REDUCE warming by that amount, from 1.7 deg C to 1.5 deg C.
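For readers following the algebra, the quoted chain of numbers reproduces from λ = 1/(ΔF/ΔTs − ΔS/ΔTs), with the forcing back-solved as 4 W/m2 from 0.3 × ΔF = 1.2 deg C (my assumption; the quote never states ΔF):

```python
# lambda = 1 / (dF/dTs - dS/dTs);  warming dTs = lambda * dF
dF = 4.0   # W/m^2, back-solved so that 0.3 K m^2/W * dF = 1.2 deg C

for label, dF_dTs, dS_dTs in [("no feedback       ", 3.3, 0.0),
                              ("+ water vapour IR ", 2.3, 0.0),
                              ("+ solar absorption", 2.3, 0.2)]:
    lam = 1.0 / (dF_dTs - dS_dTs)                 # K m^2/W
    print(f"{label}: lambda = {lam:.2f} K m^2/W -> dTs = {lam * dF:.1f} deg C")
# -> 1.2, 1.7 and 1.9 deg C, matching the quote. The objection above is to
#    the sign of the last step: sunlight absorbed by vapour aloft is sunlight
#    that never warms the surface, so the commenter argues it should pull the
#    1.7 deg C down toward 1.5 deg C rather than push it up to 1.9 deg C.
```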

This of course ignores negative feedbacks due to a larger fraction of the additional wattage going into latent heat. The evaporation of additional water does not come free: additional wattage is eaten up in the latent heat of evaporation and in the convection that carries that water vapor into the atmosphere.

The old professor
September 29, 2019 12:34 pm

I was having a conversation with a doctor. He remarked, “Just because you are a professor of computer science doesn’t mean you automatically know what their models are.”
I have taught computer modeling since about 1967-68. There are two main problems in computer models. The first is obvious: simplification. A model must simplify. The problem case is oversimplification. Parameterization of cloud cover is a simplification.
The second problem is a simplification of data input. When significant data is ignored — garbage in, garbage out.
Either kind of simplification (necessary because of limited computer power) is problematic.
Modeling using 1960s-2015 methods is almost surely doomed to fail. However, a self-taught AI given all the data (including stuff not believed to be important) may well come up with a way to predict 10 years out. No one is doing this as far as I know.

Reply to  The old professor
September 30, 2019 3:02 am

Well, AI coded in FORTRAN would be a bit hard to do.

Mark Pawelek
September 29, 2019 12:40 pm

He has an inexpensive ebook available too, half English, half Japanese. Google Translate copes with the text, but don’t give the translation service HTML; it will break. It handles a mixture of Japanese and English, returning English. The book has no images.
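A minimal sketch of that workaround, using only Python’s standard library to strip a page down to plain text before handing it to a translation service (the sample markup here is invented for illustration):

```python
# Strip HTML down to plain text with the standard library, then translate that.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Keeps only text nodes; tags and attributes are discarded."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

    def text(self):
        return "".join(self.chunks)

extractor = TextExtractor()
extractor.feed("<p>気候モデルの話 <em>mixed with English</em></p>")  # invented sample
print(extractor.text())   # -> 気候モデルの話 mixed with English
```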

Mark Pawelek
September 29, 2019 1:01 pm

His views summarized:

Now, I must emphasize here that my skepticism on the “global warming hypothesis” is targeted on the “catastrophic” part of the hypothesis and not on the “global warming” per se. That is, there is no doubt that increased carbon dioxide concentration in the atmosphere does have some warming effect on the lower troposphere (about 0.5 degrees Kelvin for a doubling from the pre-industrial revolution era, according to true experts), although it has not been proven that the warming effect actually results in a rise in the global mean surface temperature, because of the extremely complex processes operating in the real climate system, many of which are represented in perfunctory manner at best or ignored altogether in climate simulation models.

September 29, 2019 1:52 pm

Thanks for this article, Mike.

I will incorporate this in my future newspaper articles

Mark Pawelek
September 29, 2019 2:39 pm

Book excerpt, translated (by Google Translate) from Japanese:

I heard a story that seems to be all too common. At the time, she had a [male] friend from Taiwan who was working hard toward a PhD at the University of Illinois under Professor Michael Schlesinger, a climatologist. (I don’t know Professor Schlesinger directly, but I have heard that one of my teachers, Professor Peter Stone of MIT, once shot down an outrageous claim Schlesinger made in a public debate over a paper with the words, “That guy is a fool.”) Under Professor Schlesinger’s guidance, the friend was doing research on climate warming as a topic for the IPCC report. As part of that, he was running CO2-doubling experiments using a climate simulation model. Apparently troubled, he confided to her that, under Professor Schlesinger’s direction, he had to keep changing the values of the various parameters in the simulation model and repeating the experiment until the global average temperature rise the professor wanted came out. The friend, she said, agonized over the question, “Is this really science?”

In fact, there are many parameters in a climate simulation model, and their values are not determined on the basis of scientific evidence; they are often set at the convenience of the model user. That is, the values are chosen for the purpose of “tuning” the behavior of the simulation model. This is inevitable, because the many physical and chemical processes that the climate model cannot express with physical and chemical equations are represented by makeshift methods. I myself tuned climate models using various parameter values. Naturally, a climate simulation model is a tool developed for academic research, to be used for scientifically meaningful studies within the limits of such models; it is not something that should be used for climate change prediction. That serious graduate student evidently understood the simple fact that there is no way to make meaningful predictions when arbitrarily chosen parameter values change the results. That is why he was so troubled by his supervisor’s command to keep tuning the model until the desired result was achieved.
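The ‘tuning’ procedure described in the excerpt is easy to caricature in a few lines. The model below is invented for illustration: a two-constant energy balance with one free feedback parameter that gets nudged until the wanted number appears. It is nobody’s actual model.

```python
# 'Change the parameters and repeat until the desired value comes out.'
def toy_model(feedback):
    """Equilibrium 2xCO2 warming of a toy energy balance, in K."""
    forcing, planck = 3.7, 3.2          # W/m^2 and W/m^2/K, round textbook values
    return forcing / (planck - feedback)

desired = 3.0                            # the answer we have decided to obtain
feedback = 0.0
while toy_model(feedback) < desired:     # keep 'running the experiment'...
    feedback += 0.01                     # ...nudging the free parameter each time
print(f"feedback = {feedback:.2f} W/m^2/K -> warming = {toy_model(feedback):.2f} K")
# Any target below the runaway limit can be 'found' this way, which is why a
# tuned match to a desired number says nothing about predictive skill.
```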

Bruce of Newcastle
September 29, 2019 3:00 pm

I suspect the models would do far better if they had the correct value for climate sensitivity entered into them. That is supported by the Russian model, which John Christy points out does a reasonable job of matching the temperature record.

Unfortunately for them if the ensemble modellers did include the correct sensitivity they’d show that CAGW can’t happen, and they would lose their funding.

September 29, 2019 8:46 pm

My Master’s thesis involved adding biases (fudge factors) to a tropical cyclone forecast model to better initialize the current motion of the storm. We had to add these fudge factors because (1) the tropical storm science was limited, and (2) worse yet, the data were very limited. It’s all about the science, stupid, and all about the data, stupid. Climate modeling has the same and even greater challenges, because there are even more feedback mechanisms involved. Until we can adequately model past Ice Ages, we have very little hope of achieving reasonable climate modeling. Remember what Winston Churchill said: “The farther backward you can look, the farther forward you can see.”

Brian Perrin
September 30, 2019 4:39 am

Here is a thought about computer modelling, to be commented on please.
The lengthy discussions over the years, here and elsewhere, consider digital computer models.
There is an alternative: analog models. And I think we live in one, and have all the parameters to hand if we measure them! The time frame is another issue, which we cannot change with the model we live in. But maybe, just maybe, honing our skills at interpreting the past of both the ‘analog’ Earth and the Sun would be a better way to spend our wealth.

‘Old’ Nick.

TTBob
September 30, 2019 12:09 pm

Not an issue!

Take this article:

https://www.nature.com/articles/s41561-019-0383-x

>The Hadley circulation has large climate impacts at low latitudes by transferring heat and moisture between the tropics and subtropics. Climate projections show a robust weakening of the Northern Hemisphere Hadley circulation by the end of the twenty-first century. Over the past several decades, however, atmospheric reanalyses indicate a strengthening of the Hadley circulation. Here we show that the strengthening of the circulation in the Northern Hemisphere is not seen in climate models; instead, these models simulate a weakening of the circulation in the past 40 years. Using observations and a large ensemble of model simulations we elucidate this discrepancy between climate models and reanalyses, and show that it does not stem from internal climate variability or biases in climate models, but appears related to artefacts in the representation of latent heating in the reanalyses. Our results highlight the role of anthropogenic emissions in the recent slowdown of the atmospheric circulation, which is projected to continue in coming decades, and question the reliability of reanalyses for estimating trends in the Hadley circulation.

So, apparently, yet again the reanalysis shows the opposite trend to the models… too bad for the reanalysis! It must be biased and has to be corrected! Not the models, no – they are perfect; the reanalysis is flawed!

September 30, 2019 12:34 pm

The problem here, of course, is the idolatry of ideology, combined with a system of unimaginable corporate graft and infused with a white-knuckled lust for concentrated, total, unaccountable power over virtually every aspect of human existence and choice (in even the most trivial and mundane realms of life). It operates at the international, national and state (California in particular) levels, wherever the Left dominates and controls discourse, the creation, interpretation and dissemination of information, and the acceptable parameters of thought, discussion and debate. (And the Left, from the 20th century to the present, has been alternately brown, black, red and now, in recent decades, green, but it is the same Left, seeking very much the same ends, aims and goals, and driven by the same underlying collectivist vision.)

Having overwhelmed and smothered the academic humanities and social sciences several generations ago, the totemic spirits of political correctness now seek new conquests even within the natural and hard sciences. This is not new. Lysenkoism in the Soviet Union (and the very idea of Marxism as a scientific discipline known as “scientific socialism”) and the concepts of “Aryan” physics, chemistry or biology within Nazi ideology are all representative of the inevitable metastasization of the underlying totalitarian, collectivist mentality and zeitgeist, once it is in social, cultural, political and legal place, common to all manifestations of the progressive vision, the vision Dr. Sowell tells us is that of “the Anointed.”

The idea, in other words, of progressive ideology of one kind or another driving, animating and determining the nature and acceptable range of discourse within science is not a novel development, but rather the reemergence, under the banner of “saving the planet,” “the climate crisis” and “sustainability” of ideas long fundamental to the leftist/progressive project of “changing the world.”

And if the world has to be razed to the ground, and its ashes scattered, to do so, then so be it. To “save the planet,” bring to pass a new golden world of “hope and change,” and redeem and purify humankind in a future world of “social justice” and Edenic felicity, what lie is too big to tell, what crime too awful to contemplate, what means to that end beyond possibility?

What data too clear to fudge…or hide?

tom0mason
October 1, 2019 7:38 am

The climate models are underpinned by suppositions made by the chemist-cum-physicist Svante Arrhenius, and by his ideas about the influence of CO2 and carbonic acid in the atmosphere.
He was wrong then, and the models are wrong now.
End of story!