Guest essay by Rud Istvan
The IPCC’s Fifth Assessment Report, Working Group 1 (AR5 WG1) Summary for Policy Makers (SPM) was clear about the associated Coupled Model Intercomparison Project (CMIP5) archive of atmosphere/ocean general circulation models (AOGCMs, hereafter just GCM). CMIP5 results are available via the Royal [Koninklijk] Netherlands Meteorological Institute (KNMI). The SPM said about CMIP5:
§D.1 Climate models have improved since the AR4. Models reproduce observed continental-scale surface temperature patterns and trends over many decades, including the more rapid warming since the mid-20th century and the cooling immediately following large volcanic eruptions (very high confidence).
§D.2 Observational and model studies of temperature change, climate feedbacks and changes in the Earth’s energy budget together provide confidence in the magnitude of global warming in response to past and future forcing.
Neither statement is true, as the now infamous CMIP5/pause divergence proves (illustrated below). CO2 continued to increase; temperature didn’t.
The interesting question is why. One root cause is so fundamentally intractable that one can reasonably ask how the multibillion-dollar climate model 'industry' ever sprang up unchallenged.[1]
GCMs are the climate equivalent of engineering's familiar finite element analysis (FEA) models, used these days to help design nearly everything, from bridges to airplanes to engine components (solving for stress, strain, flexure, heat, fatigue, and so on).
[Figure: an engineering finite element analysis (FEA) model.]
In engineering FEA, the input parameters are determined with laboratory precision by repeatedly measuring actual materials. Even non-linear 'unsolvables' like Navier-Stokes fluid dynamics (aircraft air flow and drag, modeled using the computational fluid dynamics (CFD) subset of FEA) are parameter-verified in wind tunnels, as car and airplane designers actually do with full- or reduced-scale models.
[Figure: wind tunnel verification of a CFD model.]
That is not possible for Earth’s climate.
GCMs cover the world in stacked grid cells (engineering's finite elements). Each cell has some set of initial values. Then a change (like IPCC RCP8.5's increasing CO2) is introduced (no different in principle from increased traffic loading raising bridge component stress, or increased aircraft speed raising frictional heating), and the GCM calculates how each cell's values change over time.[2] The calculations are based on established physics like the Clausius-Clapeyron equation for water vapor, radiative transfer by frequency band (aka the greenhouse effect), or the Navier-Stokes fluid dynamics equations for convection cells.
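To make the grid-cell picture concrete, here is a deliberately crude toy sketch in Python. It is not code from any actual GCM: the grid size, forcing response, and mixing rule are all invented for illustration. A small grid of cells holds a temperature value, a prescribed CO2 forcing nudges each cell every 30-minute step, simple neighbor mixing stands in for the dynamics, and an approximate Clausius-Clapeyron formula supplies saturation vapor pressure.

```python
# Deliberately crude toy sketch (not any real GCM): a grid of cells, each with
# a state value, stepped forward in time under a prescribed forcing, with
# simple neighbor mixing standing in for the dynamics.
import numpy as np

NLAT, NLON = 36, 72      # ~5 degree toy grid, far coarser than CMIP5
DT_HOURS = 0.5           # 30-minute time step, as cited from UCAR

def saturation_vapor_pressure_hpa(temp_k):
    """Approximate Clausius-Clapeyron saturation vapor pressure (hPa)."""
    return 6.11 * np.exp(5417.0 * (1.0 / 273.15 - 1.0 / temp_k))

def step(temperature_k, co2_forcing_wm2):
    """Advance the toy temperature grid by one time step."""
    # Placeholder 'radiative' warming proportional to the imposed forcing.
    warmed = temperature_k + 1.0e-5 * co2_forcing_wm2 * DT_HOURS
    # Placeholder horizontal exchange with the four neighboring cells.
    neighbor_mean = (np.roll(warmed, 1, axis=0) + np.roll(warmed, -1, axis=0) +
                     np.roll(warmed, 1, axis=1) + np.roll(warmed, -1, axis=1)) / 4.0
    return 0.96 * warmed + 0.04 * neighbor_mean

temperature_k = np.full((NLAT, NLON), 288.0)     # start near 15 C everywhere
for _ in range(48):                              # one simulated day
    temperature_k = step(temperature_k, co2_forcing_wm2=3.7)

print("global mean T (K):", round(float(temperature_k.mean()), 3))
print("mean saturation vapor pressure (hPa):",
      round(float(saturation_vapor_pressure_hpa(temperature_k).mean()), 2))
```

A real model does this for dozens of state variables per cell, in three dimensions, for decades of 30-minute steps, which is where the computational burden discussed next comes from.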
The CMIP5 archive models used up to 30 vertically stacked atmospheric cells, up to 30 stacked ocean cells, and time steps as fine as 30 minutes, according to UCAR.
[Figure: schematic of stacked GCM grid cells.]
CMIP5 horizontal spatial resolution was typically ~2.5° latitude/longitude at the equator (about 280 km). The finest CMIP5 horizontal resolution was ~1.1°, or about 110 km. That limit was imposed by computational constraints. Doubling resolution by halving a grid cell's horizontal dimensions quadruples the number of cells. It also roughly halves the time step, due to the Courant-Friedrichs-Lewy (CFL) condition. (Explaining CFL for numerically solved partial differential equations is beyond the scope of this post.) Doubling resolution to a ≈55 km grid is therefore ~4 x 2 = 8 times as computationally intensive. The University Corporation for Atmospheric Research (UCAR) says the GCM rule of thumb for 2x spatial resolution is 10x the computational requirement: one order of magnitude per doubling of resolution.
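A small back-of-envelope calculation makes the scaling concrete. The sketch below assumes ~5.1 x 10^8 km² of Earth surface and the ~30 + 30 vertical levels mentioned above; it ignores map-projection details and is an illustration of the arithmetic, not of any actual model's performance.

```python
# Back-of-envelope arithmetic for the resolution argument: halving the
# horizontal grid spacing quadruples the cell count and (via CFL) roughly
# halves the time step, ~8x the work; UCAR's rule of thumb is ~10x.
import math

EARTH_SURFACE_KM2 = 5.1e8
LEVELS = 30 + 30            # ~30 atmosphere + ~30 ocean levels (per the text)

def cells(grid_km):
    """Rough total cell count at a given horizontal grid spacing."""
    return EARTH_SURFACE_KM2 / (grid_km * grid_km) * LEVELS

for grid_km in (280, 110, 55, 25, 1.5):
    rel_cost_naive = (110 / grid_km) ** 3                 # 4x cells, 2x steps per halving
    rel_cost_ucar = 10 ** math.log2(110 / grid_km)        # ~10x per halving
    print(f"{grid_km:>6} km: ~{cells(grid_km):.1e} cells, "
          f"cost vs 110 km: naive ~{rel_cost_naive:.0e}x, UCAR ~{rel_cost_ucar:.0e}x")
```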
The spatial resolution of modern weather models is necessarily much finer. The newest (installed in 2012) UK Met Office weather supercomputer and associated models use a coarser ~25 km scale (the NAE model, for things like pressure [wind] gradients and frontal boundaries) and a fine 1.5 km scale (the UKV model, for things like precipitation and local flood warnings). As their website proudly portrays:
[Figure: UK Met Office weather model grids (NAE ~25 km and UKV ~1.5 km).]
This is possible because UK Met weather models only simulate the UK region out to a few days, not the planet for many decades. Simulating ΔT out to 2100 on the 'coarse' 25 km Met weather grid is roughly two orders of magnitude beyond present capabilities (roughly two further halvings from 110 km: 4x4 more cells, 2x2 more time steps, or ≈10x10 by UCAR's rule of thumb). Simulating out to 2100 at 1.5 km resolution, fine enough to resolve tropical convection cells (and their 'Eschenbach' consequences), takes roughly six successive halvings (110 → 55 → 27 → 13 → 7 → 3 → 1.5 km) at ~10x each, so about six orders of magnitude beyond present computational capabilities. Today's best supercomputers take about two months for a single GCM run (fifty continuous days is typical, per UCAR). A single 1.5 km run would therefore take on the order of a couple hundred thousand years (a rough back-of-envelope version of this arithmetic is sketched after the quote below). That is why AR5 WG1 Chapter 7 said (concerning clouds, at §7.2.1.2):
“Cloud formation processes span scales from the submicron scale of cloud condensation nuclei to cloud system scales of up to thousands of kilometres. This range of scales is impossible to resolve with numerical simulations on computers, and is unlikely to become so for decades if ever.”
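Here is the promised back-of-envelope version of that run-time arithmetic. It assumes the ~50-day baseline run at ~110 km cited above and UCAR's ~10x-per-halving rule, applied continuously; it illustrates the scaling logic and is not a measurement of any real system.

```python
# Rough wall-clock extrapolation: ~50 continuous days for one CMIP5-class run
# at ~110 km, and ~10x cost per halving of the grid spacing (UCAR rule of thumb).
import math

BASE_RUN_DAYS = 50.0      # typical CMIP5-class run, per UCAR (from the text)
BASE_GRID_KM = 110.0      # finest CMIP5 horizontal resolution

def projected_run_years(target_grid_km):
    """Projected wall-clock time (years) for one run at the target grid spacing."""
    halvings = math.log2(BASE_GRID_KM / target_grid_km)
    return BASE_RUN_DAYS * 10 ** halvings / 365.0

for target_grid_km in (25.0, 1.5):
    print(f"{target_grid_km:>5} km grid: "
          f"~{projected_run_years(target_grid_km):,.0f} years per run")
```

The 25 km case comes out at roughly two decades per run and the 1.5 km case at a couple hundred thousand years, which is the sense in which convection-resolving global projections are computationally out of reach.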
The fundamentally intractable GCM resolution problem is nicely illustrated by a thunderstorm weather system moving across Arizona. Cells of 110 x 110 km are the finest resolution computationally feasible in CMIP5, and they are useless for resolving convection processes.
[Figure: thunderstorm system over Arizona compared with a 110 x 110 km CMIP5 grid cell.]
Essential climate processes like tropical convection cells (thunderstorms), which release latent heat of evaporation into the upper troposphere where it more easily escapes to space, and whose precipitation removes water vapor and so lowers that feedback, simply cannot be simulated by GCMs. Sub-grid-cell climate phenomena cannot be simulated from the physics; they have to be parameterized.
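To illustrate what 'parameterized' means in practice, here is a purely invented toy scheme: instead of simulating thunderstorms, a formula converts a grid cell's mean humidity and temperature into precipitation and latent-heat proxies, controlled by tunable constants. The threshold and efficiency values below are arbitrary, and this is a sketch of the idea, not any scheme actually used in CMIP5 models.

```python
# Purely illustrative sketch of 'parameterization': sub-grid convection is not
# simulated, it is replaced by a formula driven by the grid cell's mean state,
# with tunable constants (here: rh_threshold and efficiency).
def toy_convection_scheme(cell_mean_rh, cell_mean_temp_c,
                          rh_threshold=0.80, efficiency=0.3):
    """Return (precipitation proxy, latent heat released aloft proxy).

    rh_threshold and efficiency are the kind of knobs that must be tuned
    against observations rather than derived from first-principles physics.
    """
    if cell_mean_rh <= rh_threshold or cell_mean_temp_c <= 0.0:
        return 0.0, 0.0
    excess = cell_mean_rh - rh_threshold
    precipitation = efficiency * excess * cell_mean_temp_c   # arbitrary units
    latent_heat_aloft = 2.5 * precipitation                  # crude proxy
    return precipitation, latent_heat_aloft

print(toy_convection_scheme(cell_mean_rh=0.90, cell_mean_temp_c=28.0))
```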
And that is a second intractable problem. It is not possible to parameterize correctly without knowing attribution: how much of observed past change is due to greenhouse gases, and how much is due to some 'natural' variation. The IPCC's AR5/CMIP5 parameter attribution was mainly to AGW (per the SPM):
§D.3 This evidence for human influence has grown since AR4. It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.
CMIP5 parameterizations were determined in two basically different ways. Since 2002, the DoE has sponsored the CAPT program, which makes multiple short-term comparisons of GCMs run for a few days (at coarse resolution) against their numerical weather prediction brethren and actual observed weather. The premise is that short-term GCM divergence from the weather models must be due to faulty parameterization, which the weather models don't need as much.[3] This works well for 'fast' phenomena, like a GCM mistakenly splitting the ITCZ in two within two days (the illustration in the cited paper), but not for 'slow' phenomena like changes in upper troposphere humidity or cloud cover as CO2 rises over time.
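The CAPT idea can be caricatured in a few lines: initialize the climate model like a weather forecast, compare a few days of output against the observed reference, and treat fast-growing drift as a fingerprint of the parameterizations. The numbers below are made-up placeholders standing in for real model output and observations.

```python
# Caricature of a CAPT-style check: run the climate model like a short weather
# forecast and watch how quickly it drifts from the observed reference.
# The arrays are invented placeholders, not real model output.
import numpy as np

observed = np.array([288.0, 288.1, 288.0, 287.9, 288.2, 288.1])   # daily means (toy)
gcm_short_forecast = np.array([288.0, 288.4, 288.9, 289.5, 290.2, 291.0])

daily_drift = np.abs(gcm_short_forecast - observed)
print("drift by forecast day:", daily_drift)        # fast growth -> suspect parameterizations
print("mean drift over days 1-5:", daily_drift[1:].mean())
```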
The second way is to compare longer-term observational data at various time scales to parameterization results, and 'tune' the parameters to reproduce the observations over longer time periods. This was the approach taken by the NOAA MAPP CMIP5 Task Force.[4] It is very difficult to tune for factors like changes in cloud cover, albedo, SST, or summer Arctic sea ice, for which there is little good long-term observational data for comparison. And the tuning still requires assuming some attribution linkage between the process (model), its target phenomenon output (e.g. cloud cover, Arctic ice), and the observations.
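Mechanically, that second approach looks something like the sketch below: choose the value of a tunable constant that minimizes the mismatch between a toy diagnostic scheme and a longer observed record. Both the 'observations' and the toy cloud formula here are invented; only the mechanics of the fit are the point.

```python
# Minimal sketch of tuning a parameterization constant against a longer
# observed record. The data and the formula are invented placeholders.
import numpy as np

observed_cloud_fraction = np.array([0.62, 0.63, 0.61, 0.64, 0.62, 0.63])
grid_mean_humidity      = np.array([0.71, 0.73, 0.70, 0.74, 0.71, 0.72])

def toy_cloud_scheme(humidity, critical_rh):
    """Diagnose cloud fraction from grid-mean relative humidity (toy formula)."""
    return np.clip((humidity - critical_rh) / (1.0 - critical_rh), 0.0, 1.0)

# Grid-search the tunable constant for the best fit to the observations.
candidates = np.linspace(0.10, 0.70, 121)
errors = [np.mean((toy_cloud_scheme(grid_mean_humidity, c)
                   - observed_cloud_fraction) ** 2) for c in candidates]
best = candidates[int(np.argmin(errors))]
print(f"tuned critical_rh = {best:.2f}")
```

The catch the text identifies is that the 'observed' record being matched is itself a mix of forced and natural change, so the tuned constant silently absorbs whatever attribution assumption went into the comparison.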
CMIP5 parameterizations were tuned to hindcast temperature as well as possible from 2005 back to about 1975 (the mandatory three-decade hindcast), as explained by the CMIP5 experimental design itself.[5] This is circumstantially evident from the 'goodness of fit'.
[Figure: CMIP5 hindcast versus observed temperatures, roughly 1975-2005.]
Assuming mainly anthropogenic attribution means the GCMs were (with pause hindsight) incorrectly parameterized. So they now run hot, as the assumed-away 'natural' variation swings toward cooling, as it did from about 1945 to about 1975. This was graphically summarized by Dr. Akasofu, former head of the International Arctic Research Center, in 2010, and ignored by IPCC AR5.[6]
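That attribution argument can be shown with a toy calculation: generate a synthetic temperature record that is part steady forcing and part a roughly 60-year natural oscillation, tune a sensitivity over 1975-2005 as if the warming were entirely forced, and the tuned model then runs hot once the oscillation turns down. Every number below is invented purely to show the logic, not to estimate any real climate quantity.

```python
# Toy illustration of the tuning/attribution argument: if part of the
# 1975-2005 warming was a natural oscillation but the model is tuned as if
# it were all forced, the tuned sensitivity is too high and the projection
# runs hot once the oscillation turns over. All numbers are invented.
import numpy as np

years = np.arange(1975, 2031)
toy_forcing = 0.04 * (years - 1975)                          # steadily rising CO2 proxy
natural = -0.15 * np.cos(2 * np.pi * (years - 1975) / 60.0)  # ~60-yr cycle, rising 1975-2005

TRUE_SENSITIVITY = 0.5
observed = TRUE_SENSITIVITY * toy_forcing + natural          # synthetic 'observations'

# Tune over 1975-2005 assuming ALL the observed warming is forced.
calibration = years <= 2005
fit = np.polyfit(toy_forcing[calibration], observed[calibration], 1)
tuned_sensitivity = fit[0]
projection = np.polyval(fit, toy_forcing)

print(f"tuned sensitivity {tuned_sensitivity:.2f} vs true {TRUE_SENSITIVITY:.2f}")
print(f"by 2030 the tuned projection runs hot by "
      f"{projection[-1] - observed[-1]:+.2f} toy degrees")
```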
Akasofu's simple idea also explains why Arctic ice is recovering, to the alarm of alarmists. DMI ice maps and Larsen's 1944 Northwest Passage transit suggest a natural cycle in Arctic ice, with a trough in the 1940s and a peak in the 1970s. Yet Arctic ice extent was not well observed until satellite coverage began in 1979, around a probable natural peak. The entire observational record until 2013 may be just the decline phase of some natural ice variation. The recovery in extent, volume, and multiyear ice since 2012 may be the beginning of a natural 35-year or so ice buildup. But the GCM attribution is quite plainly to AGW.
[Figure: Arctic sea ice.]
Almost nobody wants to discuss the fundamentally intractable problem with GCMs. Climate models unfit for purpose would be very off message for those who believe climate science is settled.
References:
[1] According to the congressionally mandated annual FCCE report to Congress, the US alone spent $2.66 billion in 2014 on climate change research. By comparison, the 2014 NOAA NWS budget for weather research was $82 million; only three percent of what was spent on climate change. FUBAR.
[2] What is actually calculated are values at cell corners (nodes), based on the cell’s internals plus the node’s adjacent cells internals.
[3] Phillips et al., Evaluating Parameterizations in General Circulation Models, BAMS 85: 1903-1915 (2004).
[4] NOAA MAPP CMIP5 Task Force white paper, available at cpo.NOAA.gov/sites.cop/MAPP/
[5] Taylor et al., An Overview of CMIP5 and the Experimental Design, BAMS 93: 485-498 (2012).
[6] Akasofu, On the recovery from the Little Ice Age, Natural Science 2: 1211-1224 (2010).