Guest essay by Michel de Rougemont
Without questioning the observed global warming or the emission to the atmosphere of infrared-absorbing gases, three central questions remain without properly quantified answers; until they are answered, no climate policy other than gradual adaptation can be justified: the sensitivity of the climate to human contributions, the disadvantages and benefits for mankind of a generally warmer climate, and the foreseeable evolution of the climate in consequence of unpredictable changes in human activity. To date, no instrumental observations made in our climate system can disentangle anthropogenic effects from natural variations.
We are left with conjecture and speculation, both about the recent past and about the future. For this, climatologists develop models with which they can test their hypotheses. But these models are obviously overheating.
To verify their validity, past climate changes must be hindcast (reconstructed a posteriori). It is then possible to compare the model results with the observed reality. Here is an example of such reconstructions, made with 102 CMIP5 models (Coupled Model Intercomparison Project, Phase 5) from different institutions around the world, compared with series of observations made by balloons and satellites between 1979 and 2016.
Over that period, the mid-troposphere temperature over the tropics (taken here just as one example) rose at a rate of 0.11 °C per decade (between 0.07 and 0.17), whereas, according to these models, it should have risen at a rate of 0.27 °C per decade (between 0.12 and 0.46). This difference is statistically significant. Since 1998, the global warming of the lower troposphere has been only 0.02 °C per decade; this current period is called a “pause”, perhaps because it is hoped that the temperature will soon become more animated.
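As a minimal sketch of how such a decadal trend is estimated, the snippet below fits a least-squares slope to a monthly-style temperature series. The data are synthetic (generated to have roughly the observed 0.11 °C/decade trend), not the actual balloon or satellite records discussed above.

```python
# Sketch: estimating a decadal temperature trend by least squares.
# The series below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2017) + 0.5           # annual midpoints, 1979-2016
true_rate = 0.011                              # °C per year (0.11 °C/decade)
temps = true_rate * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Least-squares slope in °C per year, reported per decade as in the text.
slope_per_year = np.polyfit(years, temps, 1)[0]
trend_per_decade = 10.0 * slope_per_year
print(f"estimated trend: {trend_per_decade:.2f} °C per decade")
```

The same fit applied to a model hindcast series would give the model's trend, and the two slopes (with their confidence intervals) can then be compared directly.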
Another view of the same data represents their frequency distribution.
It appears that the model outputs are “normally” distributed around a mean, but only two of them overlap with the observations, the nearest one being of Russian origin.
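The overlap in question can be checked with a few lines of arithmetic. The sketch below counts how many of a set of model trend estimates fall inside the observed interval; the 102 trend values are invented here (drawn from a hypothetical normal distribution) for illustration, not the actual CMIP5 results.

```python
# Sketch: counting how many model trends overlap an observed interval.
# The model trend values are hypothetical, not the real CMIP5 outputs.
import numpy as np

rng = np.random.default_rng(1)
model_trends = rng.normal(0.27, 0.06, 102)    # °C/decade, assumed spread
obs_low, obs_high = 0.07, 0.17                # observed interval from the text

overlapping = ((model_trends >= obs_low) & (model_trends <= obs_high)).sum()
print(f"{overlapping} of {model_trends.size} models fall in the observed range")
```

With a model mean well above the observed interval, only the few low outliers of the distribution can land inside it, which is the pattern the figure shows.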
It is unlikely that the observations are erroneous, even though the model-makers have the bad habit of speaking of an “experiment” each time they run an “ensemble” of a model on their supercomputers, i.e. performing several repetitions of a scenario with the same parameters while varying its initial conditions. Until proven otherwise, in vivo results shall prevail over those obtained in silico.
If the various models are developed by competing teams, their results should not be expected to converge to an average. And if they did converge, then this mean should approach reality, a true value. The observed distribution and its sizeable inaccuracy indicate that these models may all be of the same nature, that their approximations of the climate system stem from similar methods and assumptions. There are only two outliers, one of which (the model of Russian origin) hits the target, and thus deserves to be better understood. Is this a lucky occurrence, or the result of deliberate choices?
It is undeniable that, with two exceptions, these models overheat, by a factor of about 2.5. The explanations that are provided are not satisfactory because, too often, these results are examined with the help of other models. The only plausible explanation for this difference is that one or more systematic errors are committed that lead to an exaggeration, or that amplify themselves as the model runs (by iteration along the time axis).
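To illustrate how a per-step error of this kind accumulates, the sketch below adds a small spurious warming increment at each time step of a toy iteration. The numbers are invented for illustration and come from no actual climate model; they are chosen only so that the biased total ends up roughly 2.5 times the unbiased one, as in the discrepancy discussed above.

```python
# Sketch: a small systematic bias per time step accumulates over an
# iterated run. All numbers are illustrative, not from any climate model.
steps = 380                  # e.g. monthly steps over ~38 years
true_increment = 0.011 / 12  # "true" warming per step, °C
bias = 1.5e-3                # small spurious extra warming per step

unbiased = sum(true_increment for _ in range(steps))
biased = sum(true_increment + bias for _ in range(steps))
print(f"unbiased total: {unbiased:.2f} °C, biased total: {biased:.2f} °C")
```

Even a per-step bias far too small to notice in any single step produces, after hundreds of iterations, a total trend several times the true one.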
Yet there are many possible causes of systematic error: erroneous interpretation of feedbacks (de Rougemont 2016); known phenomena that are computed only approximately; model grids still too coarse, tending to amplify instability (Bellprat and Doblas-Reyes 2016); known but non-computable phenomena that cannot be considered, or are represented by very rudimentary black boxes; and all the unknowns that remain unknown. A systematic bias may also result from calibrating the models over the same recent reference period, with parameters singularly oriented towards the influence of greenhouse gases.
Another explanation, complicated but wrong. Reader, hang on!
Often, to explain observations that contradict the calculated projections, the anthropogenic part of the models (especially the effect of CO2) is removed from their algorithms, and runs are made to ascertain what should have occurred without human influence during the period under consideration. As such a “return-to-Nature” model does not, for example, foresee a warming between 1979 and 2016, the actual warming is then attributed wholesale to human activities (Santer et al. 2017). Such scientists even dare to talk of “evidence”. The model, a human artefact including all of its maker's shortcomings, is thus used as a reference: if the system does not respond as predicted by an “internally generated trend”, calculated sui generis, any deviation from it is deemed anthropogenic. This is a tautology: I am right because I am right. It is pervasive in scientific articles on the climate and in reviews, especially the fifth report of the IPCC, which shows probability distribution curves centred on an average of model results, like the blue bars in Figure 3 albeit more nicely Gaussian, without worrying about the accuracy of that average. The climatologists, the activist ones of course, do not seem to understand the demand for accuracy.
So, there is at least one certainty: climate modelling is not [yet] adequate. Denying this would be madness, only explicable by a collective hysteria of the apologists of the Causa Climatica. If this remained within the scientific community, in search of an elusive truth, it would not be a serious matter, quite the contrary. But when it passes into public policy, such as the Paris agreement and the commitments of its contracting parties, then it is more than likely that the anticipated measures will be useless and that the huge resources to be committed will be wasted. It’s stupid and unfair.
Bellprat, Omar, and Francisco Doblas-Reyes. 2016. “Attribution of Extreme Weather and Climate Events Overestimated by Unreliable Climate Simulations.” Geophysical Research Letters 43(5): 2158–64. http://doi.wiley.com/10.1002/2015GL067189.
de Rougemont, Michel. 2016. “Equilibrium Climate Sensitivity. An Estimate Based on a Simple Radiative Forcing and Feedback System.”: 1–8.
Santer, Benjamin D., et al. 2017. “Tropospheric Warming over the Past Two Decades.” Scientific Reports 7(1): 2336.
This article was published on the author’s blog: http://blog.mr-int.ch/?p=4303&lang=en
Michel de Rougemont, chemical engineer, Dr sc tech, is an independent consultant.
In his activities in fine chemicals and agriculture, he is confronted with various environmental and safety challenges, which he does not fear.
His book ‘Réarmer la raison’ is on sale at Amazon (in French only)
He maintains a blog, blog.mr-int.ch, as well as a web site dedicated to the climate, climate.mr.int.ch.
He has no conflict of interest in relation with the subject of this paper.