Climate models overheat

Guest essay by Michel de Rougemont

Without questioning the observed global warming or the emission of infrared-absorbing gases into the atmosphere, three central questions remain without properly quantified answers, and until they have them no climate policy other than gradual adaptation can be justified: the sensitivity of the climate to human contributions, the disadvantages and benefits to mankind of a generally warmer climate, and the foreseeable evolution of the climate in response to unpredictable changes in human activity. To date, no instrumental observation made in our climate system can disentangle anthropogenic effects from natural variations.

We are left with conjectures and other speculation, both about the recent past and about the future. For this, climatologists develop models with which they can test their hypotheses. But these models are obviously overheating.

To verify their validity, past climate changes must be hindcast (reconstructed a posteriori); the model results can then be compared with observed reality. Here is an example of such reconstructions, made with 102 CMIP5 models (Coupled Model Intercomparison Project, round 5) from different institutions around the world, compared with series of observations made by balloons and satellites between 1979 and 2016.

Figure 1 Comparison of calculated and observed temperature anomalies in the mid troposphere. The red average line was smoothed with a 5-year running mean. Source: J. C. Christy, University of Alabama in Huntsville. Presented to the US Senate.

Over that period, the mid-troposphere temperature over the tropics (taken here as just one example) rose at a rate of 0.11 °C per decade (between 0.07 and 0.17), whereas according to these models it should have risen at a rate of 0.27 °C per decade (between 0.12 and 0.46). This difference is statistically significant. Since 1998, the global warming of the lower troposphere has been only 0.02 °C per decade; this current period is called a “pause”, maybe because it is hoped that the temperature will soon get more animated.
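As a point of method, per-decade trend figures in comparisons like Christy's are ordinary least-squares slopes fitted to the anomaly series. A minimal sketch with synthetic data (the series below is invented for illustration, not the satellite record):

```python
import numpy as np

# Invented monthly temperature anomalies for 1979-2016 (456 months):
# a built-in trend of 0.11 degC/decade plus noise. Illustrative only.
rng = np.random.default_rng(0)
months = np.arange(456)
years = months / 12.0
anomalies = 0.011 * years + rng.normal(0.0, 0.1, size=456)

# Least-squares slope in degC per year, converted to degC per decade.
slope_per_year = np.polyfit(years, anomalies, 1)[0]
trend_per_decade = 10.0 * slope_per_year
print(f"fitted trend: {trend_per_decade:.2f} degC/decade")
```

The fitted slope recovers the built-in 0.11 °C/decade to within the noise; the quoted ranges (0.07 to 0.17, etc.) arise because different observation series and different noise realisations give slightly different slopes.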

Figure 2 Warming rates of the mid troposphere over the tropics between 1979 and 2016, in °C per decade. Hindcasts of 102 models (left, in blue) and 13 observation series (balloons and satellites; right, in yellow). The dashed lines indicate the 95% confidence interval for each group. Source: J. C. Christy, University of Alabama in Huntsville.

Another view of the same data shows their frequency distribution.

Figure 3 Warming rates of the mid troposphere over the tropics between 1979 and 2016: frequency distribution of the 102 models (in blue) and 13 observation series (in yellow). Data source: previous graph.

The model outputs appear to be “normally” distributed around a mean, but only two of them overlap with the observations, the nearest one being of Russian origin.

Discussion

It is unlikely that the observations are erroneous, even if the model-makers have the bad habit of speaking of an “experiment” each time they run an “ensemble” of a model on their supercomputers, i.e. performing several repetitions of a scenario with the same parameters while varying its initial conditions. Until proven otherwise, in vivo results shall prevail over those obtained in silico.

If various models are developed by competing teams, their results should not be expected to converge to an average. And if they did, that mean should approach reality, a true value. The observed distribution and its sizeable inaccuracy indicate that these models may all be of the same nature, their approximations of the climate system stemming from similar methods and assumptions. There are only two outliers, one of which (the Russian model) hits the target in the right place, thus deserving to be better understood. Is that a lucky occurrence, or the result of deliberate choices?

It is undeniable that, with two exceptions, these models overheat, by a factor of about 2.5. The explanations provided are not satisfactory because, too often, these results are examined with the help of other models. The only plausible explanation for the difference is that one or more systematic errors are being committed that lead to an exaggeration, or that amplify themselves as the model runs (by iteration along the time scale).
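The factor of about 2.5 is simply the ratio of the mean model trend to the mean observed trend (roughly 0.27 and 0.11 °C per decade in Christy's comparison):

```python
# Mean warming trends from Christy's comparison (degC per decade).
model_mean_trend = 0.27      # average of the 102 CMIP5 hindcasts
observed_mean_trend = 0.11   # average of the 13 observation series

overheat_factor = model_mean_trend / observed_mean_trend
print(f"overheat factor: {overheat_factor:.1f}")  # about 2.5
```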

Yet there are many possible causes of systematic error: erroneous interpretation of feedbacks (de Rougemont 2016); known phenomena that are computed only by approximation; still-too-coarse modelling grids that tend to amplify instability (Bellprat and Doblas-Reyes 2016); known but not computable phenomena that cannot be considered, or are represented by very rudimentary black boxes; and all the unknowns that remain so. A systematic bias may also result from calibrating the models over the same recent reference period, with parameters singularly oriented towards the emergence of greenhouse gases.

Another explanation, complicated but wrong. Reader, hang on!

Often, to explain observations that contradict the calculated projections, the anthropogenic part of the models (especially the effect of CO2) is removed from their algorithms, and runs are made to ascertain what should have occurred without human influence during the period under consideration. As, for example, such a “return-to-Nature” model does not foresee any warming between 1979 and 2016, the actual warming is at once attributed to human activities (Santer et al. 2017). Such scientists even dare to talk about “evidence”. A human artefact, with all its maker's incompetencies, the model is thus used as a reference: if the system does not respond as predicted by an “internally generated trend”, calculated sui generis, any deviation from it is deemed anthropogenic. This is a tautological way of proceeding: I am right because I am right. It is pervasive in scientific articles on the climate and in reviews, especially the fifth report of the IPCC, which shows probability distribution curves centred on an average of model results, like the blue bars of Figure 3 albeit more nicely Gaussian, without worrying about the accuracy of that average. The climatologists, the activist ones of course, do not seem to understand the demand for accuracy.

So there is at least one certainty: climate modelling is not [yet] adequate. To deny this would be madness, explicable only by a collective hysteria of the apologists of the Causa Climatica. If the matter remained within a scientific community in search of an elusive truth, it would not be serious, quite the contrary. But when it feeds into public policy, such as the Paris agreement and the commitments of its contracting parties, then it is more than likely that the anticipated measures will be useless and that the huge resources involved will be wasted. It's stupid and unfair.

Bibliography

Bellprat, Omar, and Francisco Doblas-Reyes. 2016. “Attribution of Extreme Weather and Climate Events Overestimated by Unreliable Climate Simulations.” Geophysical Research Letters 43(5): 2158–64. http://doi.wiley.com/10.1002/2015GL067189.

de Rougemont, Michel. 2016. “Equilibrium Climate Sensitivity. An Estimate Based on a Simple Radiative Forcing and Feedback System.” 1–8. http://bit.ly/2xG50Tn.

Santer, Benjamin D., et al. 2017. “Tropospheric Warming Over The Past Two Decades.” Scientific Reports 7(1): 2336. http://www.nature.com/articles/s41598-017-02520-7.

This article was published on the author’s blog: http://blog.mr-int.ch/?p=4303&lang=en


About the author:

Michel de Rougemont, chemical engineer, Dr sc tech, is an independent consultant.

In his activities in fine chemicals and agriculture, he confronts various environmental and safety challenges without fearing them.

His book ‘Réarmer la raison’ is on sale at Amazon (in French only)

He maintains a blog, blog.mr-int.ch, as well as a web site dedicated to the climate, climate.mr.int.ch

E-mail: michel.de.rougemont@mr-int.ch

He has no conflict of interest in relation with the subject of this paper.

douglasproctor
September 30, 2017 10:25 pm

There has to be a reason the IPCC continues to use the full ensemble of models or model runs, but I don't know what it is. And why don't either warmists or skeptics show and discuss the few runs that approximate observations?
So, why are ALL used, and what do the matching few tell us?

Alcheson
Reply to  douglasproctor
September 30, 2017 11:01 pm

I think there are likely two main reasons they continue to use all the models. Reason one: none of the climate-science groups wants its own model to be labelled as failed, as that would start internal wars. Reason two: the models that would be discarded are all the hottest-running ones. If only the two or three coolest ones were kept… CAGW is gone, especially since even those run on the hot side. CAGW is NOT about science, it is about establishing a new world order and bringing about the end of capitalism. None of the conclusions and claims of certain doom makes much sense from a purely scientific point of view. But viewed through the lens of advancing the Progressive agenda, EVERYTHING falls into place nicely, even the constant need to keep coming up with “new” corrections to adjust the data.

richard verney
Reply to  Alcheson
October 1, 2017 1:55 am

The second is the entire reason.
Virtually every one of the models is running high. If one were to remove the warmest of the models, that would probably remove 70% of them, and the average of the remaining ensemble would give a climate sensitivity of around 1.5 to 1.8 °C per doubling.
As you say, cAGW would be over.
This, of course, is why it would be very significant should the pause reappear in 2018, before AR6 is written.
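The arithmetic behind that back-of-envelope claim can be sketched directly; the sensitivity values below are invented for illustration and are not the actual CMIP5 ensemble:

```python
import numpy as np

# Hypothetical ensemble of equilibrium climate sensitivities (degC per
# doubling of CO2) -- invented values for illustration only.
ecs = np.array([1.5, 1.7, 1.8, 2.1, 2.4, 2.6, 2.8, 3.0, 3.2, 3.5,
                3.7, 4.0, 4.2, 4.5, 4.7])

# Drop the warmest ~70% of the ensemble, keep the coolest ~30%.
keep = int(round(0.3 * len(ecs)))
coolest = np.sort(ecs)[:keep]
print(f"mean ECS of remaining models: {coolest.mean():.2f} degC/doubling")
```

With these invented numbers the trimmed mean lands in the 1.5–1.8 range the comment mentions; the point is only that averaging after discarding the hot tail moves the ensemble mean substantially.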

fmassen
October 1, 2017 1:19 am

George E. Smith’s first comment concerning the guest’s English style:
I am often baffled by the arrogance of (US) native English speakers in judging foreigners' more approximate abilities in that language, especially when the critics may have very poor abilities in any non-English language. Please appreciate that the whole scientific community accepts English as the lingua franca and does its best to communicate in English. As a Luxembourger I speak 4 languages (plus some leftover Latin), so I guess it would be unfair to expect a mastery of each of these equal to that of a native speaker.

richard verney
October 1, 2017 1:43 am

To verify their validity, the climate changes must be hindcasted (rebuilt a posteriori). It is then possible to compare the model results with the observed reality. Here is an example of such reconstructions, made with 102 CMIP5 models (Coupled Model Intercomparison Project, round Nr 5) from different institutions around the world, compared with series of observations made by balloons and satellites between 1979 and 2016.

Why do we hindcast? Hindcasting gives the models a soft ride, since it tends to lessen any inherent error/bias in the model.
There is a reason why Dr Spencer presented his chart starting at 1979 rather than at 2006 (the date from which the models ran their forward projection). In 2006, the models hindcast backwards to 1979 and forecast through to the end of the century. Starting the plot at 1979 better reveals the error in the models. Had Dr Spencer centred the plot on 2006, the error would have been attenuated.
These models should not be tested by hindcast, but rather by forward cast. The model run should start at 1860 (with all the then-known parameters set into the model) and run through to 2100. We should then examine how well it has reproduced the period 1860 to, say, 2017.
This would be quite appropriate since the claim is that ENSO is neutral, and that volcanoes lead to short-lived cooling.
Forward casting in this manner would better reveal the true extent of the error in the model.

richard verney
October 1, 2017 1:50 am

That is to make runs which include only the claimed effect of CO2. Deviations from that ideal model would identify things the models can’t explain, like El Nino/La Nina, volcanos and so on.

I just posted something below without having read your comment.
I agree. After all, climate scientists claim that ENSO is neutral. Hence there may be a very short-lived spike (warming or cooling), but overall it ought to have no impact upon the long game of the projection itself.
I have suggested that rather than hindcasting, we should simply forward cast starting at 1860 through to 2100, and then examine what the model says about the period 1860 to 2017 to see what confidence we would place on the model getting 2017 to 2100 correct.

David in Texas
October 1, 2017 8:54 am

“To verify their validity, the climate changes must be hindcasted (rebuilt a posteriori). It is then possible to compare the model results with the observed reality.”
Not so in this case, because the models were “tuned” to reproduce the past. Therefore they, by mathematical stipulation, resemble the past. The only thing being measured is how well the curve fitting (“forced fitting”) worked. Reality has nothing to do with it.
To compare a model to the past, to validate how well it represents reality, one cannot use the past in building the model.
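The validation point here is the familiar train/test idea: a model tuned to the past can match the past arbitrarily well without any out-of-sample skill. A toy sketch with invented data, using an overfitted polynomial as a stand-in for a tuned model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "past" record: a flat series that is pure noise, no real trend.
x = np.arange(40, dtype=float)
y = rng.normal(0.0, 0.5, size=40)

# Tune (overfit) a high-order polynomial to the first 30 points only...
train_x, train_y = x[:30], y[:30]
coeffs = np.polyfit(train_x, train_y, 9)

# ...it matches the tuning period well but extrapolates badly.
in_sample_err = np.abs(np.polyval(coeffs, train_x) - train_y).mean()
out_sample_err = np.abs(np.polyval(coeffs, x[30:]) - y[30:]).mean()
print(f"in-sample error:     {in_sample_err:.2f}")
print(f"out-of-sample error: {out_sample_err:.2f}")
```

The in-sample error says nothing about the much larger out-of-sample error, which is exactly why agreement with the tuning period cannot validate a model.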

richard verney
October 1, 2017 10:07 am

Of course the models run too warm, since the climate sensitivity programmed into them is set too high.
In 1971, NASA/GISS published a paper, with Schneider as lead author, which assessed the climate sensitivity to CO2 at about 0.5 °C per doubling, and about 0.6 °C per doubling with water-vapour feedback.

The paper was written at a time when everything about the basic physics of CO2 was known (its radiative properties, the absorption spectrum, etc.), and it specifically stated that an 8-fold increase in CO2 (i.e., 3 doublings) would lead to less than 2 °C of warming; see Figure 1 (Science, Volume 173, at page 138).
It appears that the model that NASA/GISS used was derived/designed by James Hansen, See the penultimate paragraph from a contemporaneous newspaper cutting that suggests this to be the case:
http://3.bp.blogspot.com/-8QovPJSWfuc/U3Le5UrCZEI/AAAAAAAAGA8/8P6OrhvS-bU/s1600/https+++media.proquest.com+media+pq+hnp+doc+144703752+fmt+ai+rep+NONE+ic=1+_s=1sJ2P2mBK1IBW+2FnaveePm2ksSvQ+3D+(1).png
The paper in Science was of course peer reviewed, and nothing new has been discovered since then about the radiative properties of CO2.
It is just that in the 1970s alarmists wanted to push global cooling as the scare story and hence wished to demonstrate that aerosol emissions dominated the climate. Now they wish to promote global warming, and since aerosol emissions are low, they have been forced to ramp up the climate sensitivity. Without this ramping up, cAGW would be a non-starter. But there is no new physics since the 1971 paper!

October 1, 2017 11:44 am

There are no climate models.
Only computer games that make grossly inaccurate predictions.
PhD scientists and computer games make the fantasy of runaway global warming seem more believable.
The computer games are a complete waste of taxpayers’ money.
Additional comments here:
http://www.elOnionBloggle.Blogspot.com

October 1, 2017 8:38 pm

GCMs don’t have a chance of credibly predicting climate until they at least input WV as an independent parameter and abandon the absurd assumption that CO2 molecules somehow drive WV molecules into the atmosphere.
After a CO2 molecule absorbs an IR photon it takes about 6 µs for it to emit a photon, but it starts bumping into other molecules, transferring energy and momentum with each contact, within about 0.0002 µs. At low altitude and away from the N & S poles there are about 35 water-vapor molecules for each CO2 molecule. Each WV molecule has more than 170 absorb/emit bands at lower energy (longer wavelength) than CO2 molecules. The energy in EMR absorbed by CO2 near ground level is effectively rerouted upward via water vapor. Higher up, as WV dwindles, CO2 participation in EMR rises above insignificant.
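Taking the two timescales quoted above at face value, the implied number of collisions a CO2 molecule undergoes before it can re-emit is a one-line calculation:

```python
# Timescales as quoted in the comment above (microseconds).
emission_time_us = 6.0        # mean time before re-emission of the photon
collision_time_us = 0.0002    # mean time between molecular collisions

collisions_before_emission = emission_time_us / collision_time_us
print(f"collisions before emission: {collisions_before_emission:.0f}")
# i.e. on these figures the absorbed energy is almost always thermalized
# by collisions long before re-emission can occur.
```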

Reply to  Dan Pangburn
October 2, 2017 9:49 am

Excellent post. I calculated that in a dry desert atmosphere increasing or even doubling CO2 from 380 to 800 ppm would not absorb any more radiation but would increase photon retention time, perhaps having a small marginal effect on temperature. Didn't realise that GCMs ignored wv.

Toneb
October 2, 2017 11:58 am

“Didn’t realise that GCMs ignored wv.”
They don’t …..
https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-3-1.html
” In GCMs, water vapour provides the largest positive radiative feedback (see Section 8.6.2.3): alone, it roughly doubles the warming in response to forcing (such as from greenhouse gas increases). There are also possible stratospheric water vapour feedback effects due to tropical tropopause temperature changes and/or changes in deep convection (see Sections 3.4.2 and 8.6.3.1.1).”

Willy Pete
Reply to  Toneb
October 2, 2017 12:12 pm

They ignore or downplay water vapor effects other than the supposed positive feedback effect from “radiative forcing”, such as clouds and evaporative cooling.
Also, their unwarranted assumption of an at least two-fold positive feedback is not in evidence. The net effect of water vapor is far more likely to be a negative feedback.

Reply to  Toneb
October 3, 2017 3:22 am

Do GCMs ignore primary photon absorption by wv?

Reply to  chemengrls
October 3, 2017 3:35 am

..and why isn't positive feedback from wv causing runaway global warming? Maybe it's because all of the absorbable primary photons are mopped up by wv, leaving not much for CO2 to cause any feedback except in the desert, where there isn't any wv anyway.
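For background on why a positive but partial feedback need not run away: if each degree of direct warming induces a further f degrees via feedback, the series 1 + f + f² + … converges to 1/(1−f) whenever f < 1. A sketch with an illustrative (invented) feedback fraction:

```python
# Illustrative feedback fraction: each degC of direct warming induces
# f degC of further warming via water vapour. Value invented for the sketch.
f = 0.5
direct_warming = 1.0  # degC from the forcing alone

# Summing the feedback series 1 + f + f^2 + ... gives 1/(1 - f).
total_warming = direct_warming / (1.0 - f)
print(f"total warming with feedback: {total_warming:.1f} degC")
# Runaway occurs only if f >= 1; a positive but sub-unity feedback
# amplifies the response without diverging.
```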

Reply to  chemengrls
October 4, 2017 8:52 am

CHE – More WV means more clouds, which limit the temperature increase just as they always have.

Reply to  Toneb
October 4, 2017 9:07 am

WV is determined as a feedback to the CO2 level in the GCMs. That is fundamentally wrong, and it ignores that WV has a vapor pressure which depends only on the temperature of the ground-level liquid water, irrespective of the presence or pressure of any other gas. Any climate analysis needs to “input WV as an independent parameter”.
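The vapour-pressure claim refers to a standard result: saturation vapour pressure over liquid water is a function of temperature alone. The empirical Tetens approximation illustrates it:

```python
import math

def saturation_vapor_pressure_kpa(t_celsius: float) -> float:
    """Tetens approximation for saturation vapour pressure over liquid
    water (kPa); it depends on temperature alone, as the comment notes."""
    return 0.61078 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

# Saturation vapour pressure roughly doubles for every ~10 degC of warming.
for t in (0, 10, 20, 30):
    print(f"{t:>3} degC -> {saturation_vapor_pressure_kpa(t):.2f} kPa")
```

At 20 °C this gives about 2.34 kPa, in line with tabulated values; no CO2 term appears anywhere in the formula.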