The Trouble with Global Climate Models

Guest essay by Rud Istvan

The IPCC’s Fifth Assessment Report, Working Group 1 (AR5 WG1) Summary for Policymakers (SPM) was clear about the associated Coupled Model Intercomparison Project (CMIP5) archive of atmosphere/ocean general circulation models (AOGCMs, hereafter just GCMs). CMIP5 results are available via the Royal [Koninklijk] Netherlands Meteorological Institute (KNMI). The SPM said about CMIP5:

§D.1 Climate models have improved since the AR4. Models reproduce observed continental-scale surface temperature patterns and trends over many decades, including the more rapid warming since the mid-20th century and the cooling immediately following large volcanic eruptions (very high confidence).

§D.2 Observational and model studies of temperature change, climate feedbacks and changes in the Earth’s energy budget together provide confidence in the magnitude of global warming in response to past and future forcing.

 

Neither statement is true, as the now infamous CMIP5/pause divergence proves (illustrated below). CO2 continued to increase; temperature didn’t.

The interesting question is why. One root cause is so fundamentally intractable that one can reasonably ask how the $multibillion climate model ‘industry’ ever sprang up unchallenged. [1]

GCMs are the climate equivalent of engineering’s familiar finite element analysis (FEA) models, used these days to help design nearly everything, from bridges to airplanes to engine components (solving for stress, strain, flexure, heat, fatigue, and so on).

[image]

In engineering FEA, the input parameters are determined with laboratory precision by repeatedly measuring actual materials. Even non-linear ‘unsolvables’ like Navier-Stokes fluid dynamics (aircraft airflow and drag, modeled using the CFD subset of FEA) are parameter-verified in wind tunnels (as car and airplane designers actually do with full-scale models).

[image]

That is not possible for Earth’s climate.

GCMs cover the world in stacked grid cells (engineering’s finite elements). Each cell has some set of initial values. Then a change (like IPCC RCP8.5 increasing CO2) is introduced (no different than the way increased traffic loading increases bridge component stress, or increased aircraft speed increases frictional heating), and the GCM calculates how each cell’s values change over time.[2] The calculations are based on established physics like the Clausius-Clapeyron equation for water vapor, radiative transfer by frequency band (aka the greenhouse effect), or the Navier-Stokes fluid dynamics equations for convection cells.
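
For readers who think in code, here is a minimal sketch of that cell-by-cell update cycle. It is a toy one-dimensional stand-in, not code from any actual GCM: every number in it (cell count, diffusivity, advection speed, forcing) is invented purely for illustration.

```python
# Toy illustration only -- not code from any actual GCM. A 1-D ring of grid
# cells carries a "temperature" value; each explicit time step updates every
# cell from its neighbours (diffusion and advection as stand-ins for the real
# dynamics) plus a small imposed forcing, mimicking the update cycle above.
import numpy as np

ncells, nsteps = 72, 500                 # cell count and number of time steps (arbitrary)
dx, dt = 1.0, 0.1                        # nondimensional grid spacing and time step
kappa, u, forcing = 0.05, 0.2, 0.001     # diffusivity, advection speed, per-step forcing

T = np.sin(np.linspace(0.0, 2.0 * np.pi, ncells, endpoint=False))   # initial cell values

for _ in range(nsteps):
    # centred differences on a periodic grid (np.roll handles the wrap-around)
    diffusion = kappa * (np.roll(T, 1) - 2.0 * T + np.roll(T, -1)) / dx**2
    advection = -u * (np.roll(T, -1) - np.roll(T, 1)) / (2.0 * dx)
    T = T + dt * (diffusion + advection + forcing)   # advance one time step

print(f"mean cell value after {nsteps} steps: {T.mean():.3f}")
```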

The CMIP5 archive used up to 30 vertically stacked atmospheric cells, up to 30 stacked ocean cells, and time steps as fine as 30 minutes according to UCAR.

[image]

CMIP5 horizontal spatial resolution was typically ~2.5° lat/lon at the equator (about 280 km). The finest CMIP5 horizontal resolution was ~1.1°, or about 110 km. That limit was imposed by computational constraints. Doubling resolution by halving a grid cell’s horizontal dimensions (x and y) quadruples the number of cells. It also roughly halves the time step due to the Courant-Friedrichs-Lewy (CFL) condition. (Explaining CFL for numerically solved partial differential equations is beyond the scope of this post.) Doubling resolution to a ≈55 km grid is therefore ~4 × 2 = 8 times as computationally intensive. The University Corporation for Atmospheric Research (UCAR) says the GCM rule of thumb for 2x spatial resolution is 10x the computational requirement: one order of magnitude per doubled resolution.
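
That scaling argument fits in a few lines. The sketch below is just the arithmetic from the paragraph above (the 4x cell count, the CFL-driven time-step halving, and UCAR’s ~10x rule of thumb); nothing in it comes from any model code.

```python
# Back-of-envelope cost scaling for halving the horizontal grid spacing,
# following the arithmetic in the text (illustrative only).
def cost_factor_per_doubling(ucar_rule=False):
    if ucar_rule:
        return 10.0        # UCAR rule of thumb: ~10x per doubling of resolution
    horizontal = 4.0       # halving dx and dy quadruples the number of cells
    time_step = 2.0        # CFL condition: the stable time step roughly halves
    return horizontal * time_step

print(cost_factor_per_doubling())                # 8.0  (cells x time steps)
print(cost_factor_per_doubling(ucar_rule=True))  # 10.0 (one order of magnitude)
```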

The spatial resolution of modern weather models is necessarily much finer. The newest (installed in 2012) UK Met Office weather supercomputer and associated models use a rough NAE scale of 25 km (for things like pressure [wind] gradients and frontal boundaries), and a fine UKV scale of 1.5 km for things like precipitation (local flood warnings). As their website proudly portrays:

[image]

This is possible because UK Met Office weather models only simulate the UK region out a few days, not the planet for many decades. Simulating ΔT out to 2100 on the ‘coarse’ 25 km Met weather grid is two orders of magnitude (≈4×4×2×2×[10/8]) beyond present capabilities. Simulating out to 2100 at a 1.5 km resolution fine enough to resolve tropical convection cells (and their ‘Eschenbach’ consequences) is (110→55→27→13→7→3→1.5 km) seven orders of magnitude beyond present computational capabilities. Today’s best supercomputers do a single GCM run in ~2 months (fifty continuous days is typical per UCAR). A single 1.5 km run would take about 1.4 million years. That is why AR5 WG1 chapter 7 said (concerning clouds, at §7.2.1.2):

“Cloud formation processes span scales from the submicron scale of cloud condensation nuclei to cloud system scales of up to thousands of kilometres. This range of scales is impossible to resolve with numerical simulations on computers, and is unlikely to become so for decades if ever.”
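
For concreteness, here is the back-of-envelope arithmetic behind the ‘seven orders of magnitude’ and ‘1.4 million year’ figures above. The 50-day run time and the ~10x-per-doubling rule are the post’s own numbers; rounding the halvings up to a whole number is an assumption made here for the sketch.

```python
# Reproduces the post's back-of-envelope extrapolation (illustrative only).
import math

run_days_110km = 50                          # "fifty continuous days is typical per UCAR"
doublings = math.ceil(math.log2(110 / 1.5))  # ~6.2 halvings from 110 km to 1.5 km, rounded up to 7
cost_multiplier = 10 ** doublings            # ~10x per doubling (UCAR rule of thumb)

run_years_1p5km = run_days_110km * cost_multiplier / 365.25
print(f"{doublings} doublings -> ~{cost_multiplier:.0e}x the cost")
print(f"single 1.5 km run: ~{run_years_1p5km / 1e6:.1f} million years")   # ~1.4 million
```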

The fundamentally intractable GCM resolution problem is nicely illustrated by a thunderstorm weather system moving across Arizona. A 110 km × 110 km cell is the finest resolution computationally feasible in CMIP5, and it is useless for resolving convection processes.

[image]

Essential climate processes like tropical convection cells (thunderstorms), which release latent heat of evaporation into the upper troposphere where it has an easier time escaping to space, and whose precipitation removes water vapor and so lowers that feedback, simply cannot be simulated by GCMs. Sub-grid-cell climate phenomena cannot be simulated from the physics. They have to be parameterized.

And that is a second intractable problem. It is not possible to parameterize correctly without knowing attribution (how much of observed past change is due to GHG, and how much is due to some ‘natural’ variation). IPCC’s AR5/CMIP5 parameter attribution was mainly AGW (per the SPM):

§D.3 This evidence for human influence has grown since AR4. It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.

CMIP5 parameterizations were determined in two basically different ways. Since 2002, the DoE has sponsored the CAPT program, which makes multiple short-term comparisons of GCMs run for a few days (at coarse resolution) against their numerical weather prediction brethren and actual observed weather. The premise is that short-term GCM divergence from weather models must be due to faulty parameterization, which the weather models don’t need as much.[3] This works well for ‘fast’ phenomena like a GCM mistakenly splitting the ITCZ in two within two days (the cited paper’s illustration), but not for ‘slow’ phenomena like changes in upper troposphere humidity or cloud cover with rising CO2 over time.

The second way is to compare longer-term observational data at various time scales to parameterization results, and ‘tune’ the parameters to reproduce the observations over longer time periods. This was the approach taken by the NOAA MAPP CMIP5 Task Force.[4] It is very difficult to tune for factors like changes in cloud cover, albedo, SST, or summer Arctic sea ice, for which there is little good long-term observational data for comparison. And the tuning still requires assuming some attribution linkage between the process (model), its target phenomenon output (e.g. cloud cover, Arctic ice), and observation.
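
A toy example makes that attribution dependence concrete. The zero-dimensional stand-in below is not any actual GCM parameterization scheme; the forcing series, the ‘natural’ swing, and the ‘observations’ are all synthetic numbers invented for illustration. The point is only that the tuned parameter lands wherever the assumed attribution split puts it.

```python
# Toy illustration of "tuning to observations" -- a zero-dimensional stand-in,
# not any real GCM parameterization. A single sensitivity-like parameter lam
# is fit by least squares so that lam * forcing, plus whatever the modeller
# assumes is 'natural' variation, matches a synthetic hindcast series.
import numpy as np

t = np.arange(0, 31)                            # hindcast years, roughly 1975-2005
forcing = 0.03 * t                              # synthetic, slowly rising forcing
natural = 0.10 * np.sin(2 * np.pi * t / 120.0)  # synthetic slow swing, rising phase only
obs = 0.017 * t + natural                       # synthetic 'observed' warming

for frac in (1.0, 0.0):   # fraction of the slow swing attributed to 'natural' variation
    target = obs - frac * natural               # what the forced response must explain
    lam = np.dot(forcing, target) / np.dot(forcing, forcing)   # least-squares fit
    print(f"assumed natural fraction {frac:.0f}: tuned parameter = {lam:.2f}")
```

The same synthetic hindcast is matched either way; only the assumed attribution changes, and with it the tuned parameter, and hence the projection.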

[image]

CMIP5 parameterizations were tuned to hindcast temperature as well as possible from 2005 back to about 1975 (the mandatory three-decade hindcast), as explained by the CMIP5 experimental design itself.[5] This is circumstantially evident from the ‘goodness of fit’.

[image]

Assuming mainly anthropogenic attribution means GCMs were (with pause hindsight) incorrectly parameterized. So they now run hot as the assumed-away ‘natural’ variation shifts toward cooling, as it did from about 1945 to about 1975. This was graphically summarized by Dr. Akasofu, former head of the International Arctic Research Center, in 2010, and ignored by IPCC AR5.[6]

Akasofu’s simple idea also explains why Arctic ice is recovering, to the alarm of alarmists. DMI ice maps and Larsen’s 1944 Northwest Passage transit suggest a natural cycle in Arctic ice, with a trough in the 1940s and a peak in the 1970s. Yet Arctic ice extent was not well observed until satellite coverage began in 1979, around a probable natural peak. The entire observational record until 2013 may be just the decline phase of some natural ice variation. The recovery in extent, volume, and multiyear ice since 2012 may be the beginning of a natural 35-year or so ice buildup. But the GCM attribution is quite plainly to AGW.

[image]

Almost nobody wants to discuss the fundamentally intractable problem with GCMs. Climate models unfit for purpose would be very off message for those who believe climate science is settled.


References:

[1] According to the congressionally mandated annual FCCE report to Congress, the US alone spent $2.66 billion in 2014 on climate change research. By comparison, the 2014 NOAA NWS budget for weather research was $82 million; only three percent of what was spent on climate change. FUBAR.

[2] What is actually calculated are values at cell corners (nodes), based on the cell’s internals plus the node’s adjacent cells’ internals.

[3] Phillips et al., Evaluating Parameterizations in General Circulation Models, BAMS 85: 1903-1915 (2004).

[4] NOAA MAPP CMIP5 Task Force white paper, available at cpo.NOAA.gov/sites.cop/MAPP/

[5] Taylor et al., An Overview of CMIP5 and the Experimental Design, BAMS 93: 485-498 (2012).

[6] Akasofu, On the recovery from the Little Ice Age, Natural Science 2: 1211-1224 (2010).

226 Comments
PA
August 9, 2015 9:32 am

http://www.nature.com/nature/journal/v519/n7543/images_article/nature14240-f4.jpg
The CO2 forcing was measured at 0.2 W/m2 for 22 PPM over an 11 year period.
Further – the data show the relationship between CO2 at the surface and IR forcing.
The models clearly aren’t matching the real world data which seems to indicate weak forcing and negative feedback.
How are they doing these long and short term parameter tuning runs, reviewing the results, and still not correcting the models?

Bart
August 9, 2015 9:42 am

“The second way is to compare longer-term observational data at various time scales to parameterization results, and ‘tune’ the parameters to reproduce the observations over longer time periods.”
The problem with this is one of observability. Basically, there is no unique parameterization – many different ones can produce the same observed behavior over the selected interval. One can find a parameterization that appears to fit, but the likelihood is that observations will diverge from the model in the future.
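
Bart’s observability point can be made concrete with a toy calculation. Everything below is invented for illustration (synthetic forcings, synthetic ‘observations’, made-up parameter pairs); it is not output from any climate model. Two quite different sensitivity/aerosol combinations reproduce the same training window almost equally well, then diverge once the forcings evolve differently outside it.

```python
# Toy demonstration of non-unique parameterization -- not any real model.
# Two different (sensitivity, aerosol-offset) pairs fit the same synthetic
# 'observed' training window, then diverge once the aerosol term levels off.
import numpy as np

t = np.arange(0, 61)                            # years 0-60; the first 31 are the training window
ghg = 0.03 * t                                  # synthetic GHG forcing that keeps rising
aer = -0.01 * t * (t < 31) - 0.30 * (t >= 31)   # synthetic aerosol forcing that levels off
obs_train = 0.017 * t[:31]                      # synthetic 'observations', training window only

candidates = {
    "high sensitivity, strong aerosol offset": (1.70, 2.0),
    "low sensitivity, no aerosol offset":      (0.57, 0.0),
}

for name, (lam, a_scale) in candidates.items():
    model = lam * (ghg + a_scale * aer)
    rmse_train = np.sqrt(np.mean((model[:31] - obs_train) ** 2))
    print(f"{name}: training RMSE {rmse_train:.3f}, year-60 warming {model[-1]:.2f}")
```

Both candidates match the training window to within thousandths of a degree, yet their year-60 projections differ by nearly a factor of two.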

Gary Pearse
Reply to  Bart
August 9, 2015 10:38 am

Indeed, they have “corrected” their fits by over-weighting aerosols so that they can hang on to high climate sensitivity and they desperately cling to it. They’ve even added puffs of smoke from unremarkable volcanoes that don’t emit into the upper atmosphere to try to support the aerosol solution to their woes. They already know that sensitivity IS less than 1.5 but to admit that is to admit there is no crisis in the making.
This science would be entirely different were the norms of morality those of a few generations ago. They survived climategate with obfuscation, whitewash investigations, misdirection and clamoring ever louder about Climate Armageddon, and found they could essentially get away with murder as far as their supporters were concerned.
They brazened out the ‘pause’ with silly claims of new records being set (one should expect a bump or two on a plateau) and 50 ridiculous reasons for it with the heat going to hidden places. Gleick the Sneick got an award, Turney of the Ship of Fools also got an award after the comedy he was in. Emboldened by the fact that there seemed to be no reckoning to deal with for any crime, they held their noses and eliminated the pause knowing criticism would soon be over and their faithful would happily adopt this new adjustment as scientific. Those with more scruples came down with clinical depressions as they fell into classic psychological D’Nile, although they will probably recover with this evidence that it doesn’t matter what lengths they go to to support their fantasies.

Reply to  Gary Pearse
August 9, 2015 10:51 am

A sensitivity of less than 1.5 deg C has been argued many times here (with sound reasoning and paleo data) to be net positive, rather than net negative, in regards to Earth’s biosphere and the human condition.
That is, humanity, through its fossil fuel CO2 injections, is producing a Modern Climate Optimum.
The real risk, though, is a Malthusian one of non-renewable resource depletion, such as mineral ores necessary to advanced technical society (copper comes immediately to mind) for which no substitute can be found. But man’s ingenuity has always come through against these Malthusian warnings. Using robotic space tugs and robotic mining to park an iridium-platinum-rich asteroid in lunar orbit and mine the ore for Earth delivery is one futuristic possibility. That scenario is about as fanciful as getting plentiful oil and natural gas out of dense shale rock would have been to the petroleum industry 50 years ago.

MarkW
Reply to  Gary Pearse
August 9, 2015 3:38 pm

The only copper that is lost is the stuff that gets sunk with ships. The stuff tossed in landfills is still there, waiting for the day when it becomes economically advantageous to go back in and get it. Wouldn’t surprise me if the amount of copper per ton of landfill material is comparable to many currently operating mines already. Plus less smelting to get it ready for market.

rogerknights
Reply to  Gary Pearse
August 9, 2015 8:32 pm

“Gleick the Sneick got an award, Turney of the Ship of Fools also got an award after the comedy he was in.”
So did Loony Lew.

MarkW
Reply to  Bart
August 9, 2015 10:44 am

I believe the author brought this point up, but I would like to re-iterate it here.
There’s parameterization, and then there’s just making it up.
It’s one thing to parameterize a known process because it’s too hard to do computationally, but then they include things like aerosols.
For most of the period being analyzed, they have no idea what level or types of aerosols were being produced or even where most of them were being produced. They just add in the amount needed to get the model to fit the temperature curve they are training for, and then declare themselves satisfied.

Bart
Reply to  MarkW
August 9, 2015 11:19 am

Again, this is the problem of observability. They do not have any measurements which would uniquely differentiate the effect of aerosols from the host of other influences. So, they can monkey around in that infinitely unobservable subspace, and come up with any answer they please.

August 9, 2015 10:17 am

Rud, Thanks for the tutorial. I had read much of that several years ago from various sources (when I started my self-education process of what GCMs, Climate change stuff, and the claims were all about) but to read, think about those climate computational problems and claims again, and refresh ideas and claims is very useful (medicine calls it CME).
What we see now though is full-on politicized science that has corrupted a message that should have been communicated with lots of uncertainty to the public in the SPM. Unfortunately the CC politicians and renewable crony capitalists have taken over the science message and turned it into voodoo magic potions (of carbon trading taxes, and renewable energy crony capitalists with their taxes & subsidy schemes) to charge the public with those costs while taking away democratic freedoms in order to impose even more taxes down the road. And all on the basis of deeply flawed, circular logic-tuned GCMs.

Mike M. (period)
August 9, 2015 10:40 am

Rud Istvan,
Thank you for a nice summary of what is right and wrong with climate models. A nice contrast to the silly claims one often hears about what climate models assume.
Is there some reason that critical sub-grid scale processes can’t be better dealt with by using adaptive grid sizes? In numerically solving ODEs, adaptive step sizes are old hat. The other possibility for dealing with such phenomena is to develop properly validated parameterizations. You are right that they can’t be validated by comparing to trends. But perhaps comparisons to sufficiently detailed time- and space-resolved data could do the job. So far as I can tell, validation gets far too little attention from the modellers.
Beyond sub-grid scale phenomena, I suspect another big problem with the models: inadequate modelling of multidecadal, possibly chaotic, processes in the oceans. My guess is that such processes are the origin of multidecadal cycles in the climate, such as in Arctic ice. In principle, GCM’s should be able to deal with such processes, but there is probably nowhere near enough data to guide model development.

Reply to  Mike M. (period)
August 9, 2015 11:29 am

” In principle, GCM’s should be able to deal with such processes, but there is probably nowhere near enough data to guide model development.”

FYI, they don’t try to hide that the GCMs don’t model internal dynamics. They pretend the open-loop system is representative of the real climate responses.
https://twitter.com/ClimateOfGavin/status/630071181450866688?s=02
And if you cannot model dynamics, you cannot possibly get the feedbacks correct even IF one knows their sign and magnitude. And with their forced positive feedbacks of H2O vapor (net strong positive), they knowingly and willfully force the models to run hot.

MarkW
Reply to  Joel O’Bryan
August 9, 2015 3:39 pm

If you don’t attempt to model dynamics, then by definition any training that you do on past data is already invalid.

Reply to  Mike M. (period)
August 9, 2015 12:24 pm

“Is there some reason that critical sub-grid scale processes can’t be better dealt with by using adaptive grid sizes?”
Yes. As the article says, GCMs work up against a CFL constraint. They can’t decrease spatial grid size without decreasing time step. But there has to be one time step everywhere, for various practical reasons. So you can’t stably reduce horizontal grid size locally where you would like to.
Modelling of multi-decadal processes is the opposite problem. Grid size is not a problem, and it brings no new difficulty of sub-grid processes. It obviously takes a lot of computer time, but that will accumulate. GCM’s are the way to make progress here.

MarkW
Reply to  Nick Stokes
August 9, 2015 3:41 pm

Actually, grid size is just as big a problem for the GCMs. Since you can’t model anything that occurs at levels less than grid size, those things have to be parameterized. However before you can parameterize them, you must first understand them. And we are still years away from being able to do that.

Reply to  Nick Stokes
August 9, 2015 4:00 pm

Nick, with all due respect, modeling essential processes like tropical convection cells is essential, and I showed how it was directly related to grid size resolution. Pictures, even.
Yes, GCMs will progress. Yes, supercomputers will progress, but slowly, for reasons given upthread in the technical supercomputer petaflops comment. Way beyond the intent of a simple guest post.
But not enough, fast enough to solve the several-orders-of-magnitude problem highlighted by the guest post. And my upthread comments on resolving attribution parameterization suggest 30-50 years more ‘good’ data before that knot can be untied. What say you to that?

Reply to  Nick Stokes
August 9, 2015 5:33 pm

It certainly will be much simpler to use fixed and uniform time steps for the entire model, but I very much doubt that this is strictly necessary. Variable time-stepping is done in FEA, and while it would certainly add complexity, it should be possible to use heterogeneous time steps in the same model. For example, at the interface of a small cell and a large one, one could perform calculations on two time scales, and where the two results are within acceptable distance of each other, one can switch to the longer time interval for further outward propagation from the large cell to other large cells.
I would be surprised if this idea had not yet been explored.
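
The bookkeeping behind that idea is easy to sketch. Under a simple advective CFL limit (time step no larger than grid spacing divided by the fastest signal speed), each finer nested grid needs an integer number of substeps per coarse-grid step. The 100 m/s signal speed and the particular grid sizes below are assumptions chosen only for illustration.

```python
# Sketch of nested ("subcycled") time steps under a simple CFL limit:
# dt <= dx / u_max. Illustrative numbers only.
import math

u_max = 100.0                       # assumed fastest signal speed, m/s
grids_km = [110, 55, 25, 1.5]       # assumed nested horizontal resolutions, km

dt_coarse = grids_km[0] * 1000.0 / u_max          # CFL-limited step on the coarsest grid
for dx_km in grids_km:
    dt = dx_km * 1000.0 / u_max                   # CFL-limited step for this grid
    substeps = math.ceil(dt_coarse / dt)          # whole substeps nested per coarse step
    print(f"{dx_km:>6} km grid: dt ~ {dt:6.0f} s, {substeps} substep(s) per coarse step")
```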

PA
Reply to  Nick Stokes
August 9, 2015 6:09 pm

Michael Palmer August 9, 2015 at 5:33 pm
It certainly will be much simpler to use fixed and uniform time steps for the entire model, but I very much doubt that this is strictly necessary.

I’m not that much of an analog guru… but event driven may make more sense than variable time steps.

Reply to  Nick Stokes
August 9, 2015 6:14 pm

Rud,
“Nick, with all due respect, modeling essential processes like tropical convection cells is essential”
Yes, of course (well, updrafts). I was referring to the “opposite problem” of multi-decadal processes, which are slow relative to the timestep.
MP,
“Variable time-stepping is done in FEA”
Many GCM’s use spectral methods for the dynamical core, for speed and accuracy. I think variable time (or space) intervals would be a big difficulty there.

Reply to  Nick Stokes
August 10, 2015 6:55 am

Well, variable space intervals at least are unavoidable on a spherical surface.
If continuously variable time intervals are too difficult, it may still be feasible to use time steps that are integral multiples of a basic time step. But whatever, fundamentally I think all this stuff is useless. I wouldn’t be surprised if subconsciously the people who program these things feel the same and don’t really try all that hard to improve them. The Harry-readme file comes to mind.

August 9, 2015 10:46 am

Off topic, I apologize; but can someone address rumors I’ve been hearing about a net loss in global land ice, which has not been offset by a net gain in sea ice… I’m having difficulty breaking this down and now I’m forced to turn to you lot for a hand up.

Sturgis Hooper
Reply to  owenvsthegenius
August 9, 2015 11:44 am

Owen,
The Antarctic is gaining ice mass, not losing it. Since this is most of the ice and freshwater on earth, loss would have to be extreme elsewhere for there to be a net loss.
http://wattsupwiththat.com/2012/09/10/icesat-data-shows-mass-gains-of-the-antarctic-ice-sheet-exceed-losses/
The Greenland ice sheet may or may not be losing mass, but in any case, not much either way. The mass of montane glaciers is negligible compared to the ice sheets, and likewise hard to say whether now a net gainer or loser.
Obviously there is a bit less ice on the planet now than during the depths of the Little Ice Age 300 years ago. The massive East Antarctic Ice Sheet, with 61% of earth’s freshwater, quit receding over 3000 years ago, during the Minoan Warm Period. The longer term trend for earth’s climate is cooling, which is a bad thing.

Reply to  owenvsthegenius
August 9, 2015 11:47 am

Yes, you heard correctly. The North American Laurentide land ice sheet melted, retreated, and disappeared starting around 20,000 yrs ago and was mostly gone by 12,500 years ago. Since then some smaller high alpine mountain glaciers that survived that wicked Climate Change have been slowly retreating as well, with occasional melt hiatuses occurring, such as during the Little Ice Age from 1450-1850 AD.

AndyG55
Reply to  Joel O’Bryan
August 9, 2015 3:15 pm

Arctic sea ice is actually at anomalously high levels compared to all but the last few hundred years of the Holocene.
Biomarkers in sediment clearly show that an open Arctic was the norm for most of the first 2/3-3/4 of the last 10,000 years.
All this politically scary melt is because we have just climbed out of the coldest period of the current interglacial. Why would things not melt ! Arctic sea ice is a pita for anyone living up there.
Unfortunately, it looks like we might have topped out ! 🙁

AndyG55
Reply to  Joel O’Bryan
August 9, 2015 3:17 pm

whoops missed an important couple of words.
Biomarkers in sediment clearly show that an open Arctic was the norm during summer

Reply to  owenvsthegenius
August 9, 2015 12:08 pm

Owen, you will find a wealth of reference resources in essays PseudoPrecision (sea level rise, indicative of land ice loss), Tipping Points (detailed analysis of the Greenland and Antarctic ice sheets), and Northwest Passage (Arctic sea ice cyclicality and measurement issues) in the ebook Blowing Smoke, available on iBooks, Kindle, …
Short summary. Greenland was near stable in the 1990s, lost about 200 GT/year through 2012, and has again apparently stabilized. See the NOAA YE 2014 report card and the DMI surface mass balance for 2015 in comparison to the peak loss year 2012. EAIS is stable/gaining. WAIS is losing, mainly from PIG in the Amundsen embayment. Estimates vary by nearly a factor of 4 from 2011 to 2014, so the data are sketchy. Arctic sea ice is recovering significantly (especially multiyear ice) from the 2012 low (see the extensive comment subthread above). Antarctic sea ice is setting record highs for now the fourth straight year.

PA
Reply to  owenvsthegenius
August 9, 2015 7:33 pm

1. Archimedes’ principle says sea ice, ice shelves, and a significant fraction of West Antarctica are irrelevant except for water cooler gossip. Only land ice above sea level counts (minus about 9% of the amount below sea level). Since there will be isostatic rebound, even less ice counts toward sea level, depending on how long it takes to melt.
2. The net ice sheet loss in Antarctica is 0-100 GT on average depending on what guesser you believe.
The average thickness of the Antarctic ice sheet is 7000 feet. There are 30 Million GT of ice. The average annual change assuming 100 GT loss is 1/300,000th (that also means it will take 300,000 years to melt). 1/300,000th of 7000 feet is 0.28 inch or less. So they are trying to measure an average change of about 1/4 inch from 6700 km (GRACE) or about 590 km from ICESAT (ICESAT died February 2010, and ICESAT 2 doesn’t go up until 2017).
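
PA’s arithmetic is easy to check. The sketch below simply reproduces the figures stated in the comment (30 million GT of ice, a 100 GT/year loss, 7,000 ft average thickness); none of the inputs are independently verified here.

```python
# Reproduces the commenter's back-of-envelope arithmetic with the figures as stated.
ice_mass_gt = 30e6           # stated Antarctic ice sheet mass, gigatonnes
loss_gt_per_year = 100.0     # stated upper-end annual loss, gigatonnes/year
avg_thickness_ft = 7000.0    # stated average ice-sheet thickness, feet

fraction_per_year = loss_gt_per_year / ice_mass_gt                   # ~1/300,000 per year
years_to_melt = 1.0 / fraction_per_year                              # ~300,000 years
thickness_change_in = avg_thickness_ft * fraction_per_year * 12.0    # inches per year

print(f"annual loss fraction: ~1/{years_to_melt:,.0f}")
print(f"average thickness change: ~{thickness_change_in:.2f} inch/year")
```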

Schrodinger's Cat
August 9, 2015 10:50 am

A good post. I suspect your final sentence: “Climate models unfit for purpose would be very off message for those who believe climate science is settled.” is the most important sentence in the whole post. It is also where climate modellers diverge from the rest of the scientific community.
Failed models have no scientific use and should be binned. They should never be used for policy making.
The amazing resilience of failed climate models suggests that they are not about science. They fulfil their political and financial purposes and that is why they are used for policymaking.

Berényi Péter
August 9, 2015 11:12 am

One root cause is so fundamentally intractable that one can reasonably ask how the $multibillion climate model ‘industry’ ever sprang up unchallenged

Istvan, the mystery is even deeper than that. There may be unknown physics related to entropy processes in chaotic nonequilibrium thermodynamic systems, a class to which the terrestrial climate system belongs; see my comment on Validation Of A Climate Model Is Mandatory.
Jaynes entropy of these systems can’t even be defined, so one obviously needs some as-yet-lacking generalization of the concept. Meanwhile, measuring entropy production of the entire system is still conceptually simple: one only has to count incoming vs. outgoing photons (plus some contribution from their different momentum distributions), because the entropy carried by a single photon is independent of frequency in a photon gas. And the only coupling between the system and its (cosmic) environment is a radiative one.
The climate system is nothing but a (huge) heat engine and trying to construct a computational model of such an engine with no understanding of the underlying entropy process is futile.
The funny thing is, while the climate system is obviously too big to fit into the lab, we could still make experimental setups of nonequilibrium thermodynamic systems with chaotic dynamics under lab conditions and study them experimentally. In that case we could have as many experimental runs as necessary while having full control over all system parameters. Unfortunately no one seems to do such experimental work, in spite of the fact that’s the way basic physics was always developed, besides, it could be done at a fraction of the cost of these pointless, oversized computer games.
We would have no electric industry without Maxwell’s breakthrough and Faraday’s monumental experimental work serving as its foundation.

Joe Crawford
Reply to  Berényi Péter
August 9, 2015 2:18 pm

I’m not sure the current crop of those who call themselves “Climate Scientists” have the knowledge to design such experiments or the math skills to analyze the results if someone else did the designs for them.

August 9, 2015 11:14 am

To contribute to this enlightening article by Rud Istvan, I would like to recommend this excellent article:
Tuning the climate of a global model. Mauritsen et al. JOURNAL OF ADVANCES IN MODELING EARTH SYSTEMS, VOL. 4, M00A01. doi:10.1029/2012MS000154, 2012.

The practice of climate model tuning has seen an increasing level of attention because key model properties, such as climate sensitivity, have been shown to depend on frequently used tuning parameters.

http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/pdf

Reply to  Javier
August 9, 2015 2:43 pm

Javier, thank you for extending my post this way. Outstanding reference contribution. I used the paper’s figure 1 in essay Models all the way down. Dr. Curry used it in her most recent congressional testimony. It is an ‘insider’s view’ of the NOAA MAPP second type of parameterization tuning, in this case the Max Planck Institute’s ECHAM6 GCM for CMIP5.

JohnWho
August 9, 2015 12:31 pm

Question:
Isn’t the biggest “Trouble with Global Climate Models” the acknowledged fact that the models do not (can not?) include all of the known factors that have an effect on the climate?
In the simplest of terms, isn’t a model, any model, in order to reflect the reality being modeled, required to include all known factors?

August 9, 2015 12:38 pm

Rud Istvan says:
” ….SPM said about CMIP5: §D.1 Climate models have improved since the AR4. Models reproduce observed continental-scale surface temperature patterns and trends over many decades, including the more rapid warming since the mid-20th century and the cooling immediately following large volcanic eruptions (very high confidence).
§D.2 Observational and model studies of temperature change, climate feedbacks and changes in the Earth’s energy budget together provide confidence in the magnitude of global warming in response to past and future forcing.
Neither statement is true, as the now infamous CMIP5/pause divergence proves (illustrated below). CO2 continued to increase; temperature didn’t. The interesting question is why…. ”
Interesting question indeed! One answer could be that the Summary for Policy Makers (SPM) is deliberately falsified for political purposes anyway and is not expected to be accurate. Another aspect would be that these model makers have no idea what physical observables to use in their models. If you know, for example, that carbon dioxide does not warm up the world, you should not use it as an observable that controls your output. It is not a secret, for example, that during the present hiatus carbon dioxide is increasing but temperature is not. That should be enough for you to dump the CO2 surface forcing and other related aspects from your models. One other aspect of those models is trying to represent the entire climate story as part of a single, mathematically calculated curve. That is plain stupid. There are breakpoints in the real temperature curve where the drivers change and the curve makes unexpected turns. One of them was early 1940, which none of them even try to get. Another one was the beginning of the twenty-first century. And still another one was the beginning of the hiatus of the eighties and nineties they so hated that they covered it up with fake warming.
The two hiatuses (the present one and the previous one I referred to) are totally outside their experience because they stubbornly resist using the correct greenhouse theory to analyze them. That theory is called MGT, or Miskolczi greenhouse theory, and has been available since 2007. They blacklisted it because they did not like its predictions, and grad students never even knew that it existed. Its prediction is very simple: addition of carbon dioxide to the atmosphere does not cause greenhouse warming. That is what we have actually observed for 18 years. During every one of these years the Arrhenius greenhouse theory predicted warming but nothing happened. A scientific theory that makes wrong predictions belongs in the waste basket of history, and that is where Arrhenius belongs. MGT differs from Arrhenius in being able to handle several greenhouse gases, such as the mix in our atmosphere, at the same time, while Arrhenius is limited to one: CO2. According to MGT, carbon dioxide and water vapor in the atmosphere form a joint optimum absorption window in the infrared whose optical thickness is 1.87. This value was obtained by analyzing radiosonde data. If you now add carbon dioxide to air it will start to absorb, just as Arrhenius predicted. But this will increase the optical thickness. And as soon as this happens, water vapor will start to diminish, rain out, and the original optical thickness is restored. The added carbon dioxide will of course continue absorbing, but the reduction of water vapor will have reduced the total absorption enough to block any warming that Arrhenius predicts.
MGT’s prediction is then that addition of carbon dioxide to air does not cause warming, precisely as we have observed for the last 18 years. The hiatus of the eighties and nineties also lasted 18 years, and jointly the two hiatuses block out greenhouse warming from 80 percent of the time since 1979, the beginning year of the satellite era. The remaining 20 percent consists of the 1998 super El Nino and a short warming that followed it. Neither one has any greenhouse connections. Hence, we can declare the entire satellite era since 1979 as greenhouse-free. You figure out what happened before 1979.

Scott
August 9, 2015 12:57 pm

And of course, this assumes the data HAS NOT BEEN MANIPULATED.
Let’s see what revelations come upon us over the next few years regarding this potential factoid?

Chris Hanley
August 9, 2015 1:55 pm

“Akasofu’s simple idea also explains why Arctic ice is recovering, to the alarm of alarmists. DMI ice maps and Larsen’s 1944 Northwest Passage transit suggest a natural cycle in Arctic ice, with a trough in the 1940s and a peak in the 1970s. Yet Arctic ice extent was not well observed until satellite coverage began in 1979, around a probable natural peak …”.
========================
That can (I think) be inferred from the temperature record:
http://www.climate4you.com/images/70-90N%20MonthlyAnomaly%20Since1920.gif

AndyG55
Reply to  Chris Hanley
August 9, 2015 3:20 pm

Biomarkers clearly show that summer Arctic sea ice was not a thing of the past.
The first 2/3+ of the Holocene probably had an open, ice free, Arctic during a reasonable part of the year.

August 9, 2015 3:46 pm

A much too unnecessarily complicated speculation about why climate models fail. They fail because CO2 does not affect temperature. End of story.
There are no “well established physics principles” in any calculation of CO2’s imaginary ability to “trap heat”.
The temperatures on Venus at pressures equal to those in Earth’s troposphere are EXACTLY what they should be and can be calculated using nothing more than their relative distances to the sun. The FACT that this can be done completely and utterly falsifies the Greenhouse Effect. Anyone who claims otherwise after checking this FACT can no longer claim to be a scientist. You are for ever afterwards one of the following three things: ideologically blinded, stupid or corrupt!

Reply to  wickedwenchfan
August 9, 2015 4:06 pm

With all due respect, I think you are wrong. Read my book. Listen to AW and JC. Such a denier-extremist (and scientifically proven wrong) stance weakens the skeptical argument. Same as Inhofe. Please stop doing that. Please.

Reply to  wickedwenchfan
August 9, 2015 6:32 pm

wicked,
I think you will find that it is usually the warmists that claim CO2 and H2O “trap heat”. The actual function of so-called greenhouse gases in the lower troposphere is to enhance convection by intercepting outgoing long-wave radiation from the earth’s surface. This enhancement mechanism increases the energy flow to the upper troposphere where GHGs, primarily CO2, can radiate directly to space. Increased CO2 concentration raises the altitude of the characteristic emission layer. The elevation of the characteristic emission layer is more a function of total pressure than of CO2 partial pressure. The slight raising of that elevation results in a slightly higher surface temperature due to the lapse rate structure of the troposphere. So increased CO2 concentration does result in a minuscule-to-unmeasurable increase in the earth’s surface temperature. Yes, it is telling that the temperatures and lapse rate structure of the Venusian atmosphere from one bar up are so earth-like.

August 9, 2015 5:50 pm

” incorrectly parameterized”
I think the problem is more fundamental than poor parameterization; it’s conservation of water vapor at the surface, an innocent-sounding phrase with big ramifications. If I remember it correctly, it’s how they turned GCMs from running cold to explaining the warming of the 80’s and 90’s.
Basically they don’t limit water vapor to 100% humidity at the surface; GCMs are allowed to exceed the natural limiting factor for water vapor. This is how they get water vapor positive feedback.
They allow this because otherwise they couldn’t explain the surface temperature when the PDO (?) led to natural warming.

August 9, 2015 7:09 pm

MikeB:

A blackbody absorbs ALL radiation falling on it!!!!!! You could say that is part of the definition of a blackbody. Electromagnetic radiation transports energy. When radiation is absorbed the energy it carries is also absorbed. The blackbody will absorb it all, regardless of where it came from – BY DEFINITION.

Physics is about REALITY, not DEFINITION. The definition of a black body may well be that it absorbs ALL radiation falling on it!!!!!! But reality is made of real bodies, not idealised objects acting according to some DEFINITION. They don’t do what the DEFINITION of a black body says, they do what real materials do according to the laws of physics. And that’s the real laws of physics, which might not be quite the same as we think they are – but that’s another story.

August 9, 2015 7:22 pm

Rud, it must be possible to run the programmes with different parameters.
In view of skepticism it must be the case that such runs have been done.
The fact that none have been released or talked about is proof that the models do work with different parameters, i.e. inputs that result in lowered climate sensitivity as an output.
Any chance that someone could leak one of those trials.
Mosher perhaps, he would know.
Or Zeke.
Of course if the climate models did work better it would not change the substance of your post: that the potential changes belie long-term prediction.
It seems an idea of programming in changes as they occur to modify the parameters, plus use of paleo data limits [we have had relative isothermality for 2 billion years], could put brakes on excessive prediction yet allow [a] more meaningful climate model[s] to develop.

Reply to  Chris4321
August 9, 2015 8:30 pm

Has been done. See Javier’s excellent reference. Problems with that paper’s ECS conclusion include comparing a single shallow slab-ocean coupled model (used to save computations) to the full CMIP5 ECHAM6.
Apples to oranges is not valid science.

Reply to  Chris4321
August 10, 2015 6:19 am

“Rud, it must be possible to run the programmes with different parameters.
In view of skepticism it must be the case that such runs have been done.”
start here
https://www.newton.ac.uk/event/clp/seminars

MfK
August 9, 2015 8:19 pm

Great post; it sheds new light on why GCMs are not predictive tools. In my humble opinion, they never can be. The system is too complex to model with any kind of computer now or in the future. And it is provably too complex to make any kind of predictions on which to base civilization-killing policies.

August 9, 2015 8:52 pm

Attributing CO2 with influence on climate is proven to be wrong.
There has always been plenty of CO2 in the atmosphere. Without it, life as we know it could have never evolved. If CO2 was a forcing, it would cause temperature change according to the time-integral of the CO2 level (or the time-integral of a function of the CO2 level). The only way that this time-integral could consistently participate in the ‘measured’ (proxy estimate) average global temperature for at least the last 500 million years is if the EFFECT OF CO2 ON AVERAGE GLOBAL TEMPERATURE IS ZERO and the temperature change resulted from other factors.
Variations of this proof and identification of what does cause climate change are at http://agwunveiled.blogspot.com Only one input is needed or used and it is publicly available. The match is better than 97% since before 1900.

AntonyIndia
August 9, 2015 9:42 pm

GCMs cover the world in stacked grid cells (engineering’s finite elements). Right. When these huge cells were reduced in size over the Karakoram mountains they could suddenly explain why those glaciers were gaining ice, not melting. How can we believe all other low-resolution cell results now? http://www.princeton.edu/main/news/archive/S41/39/84Q12/index.xml?section=topstories

August 9, 2015 10:56 pm

I noticed on the chart comparing CMIP5 with observations that the wide light colored CMIP5 band is 5% to 95% confidence. As I recall from school (ancient history), that’s a 90% confidence interval, not 95%.

richardscourtney
August 10, 2015 12:53 am

Rud Istvan:
Thank you for a nice article. It is sad that much of the ensuing thread has been trolled from your article and onto SK nonsensical ‘physics’. Your article deserves better.
I write to draw attention to a pettifogging nit-pick that you may want to clarify because warmunists exaggerate the importance of such trivia as a method to dismiss articles they cannot dispute.
You say

GCMs are the climate equivalent of engineering’s familiar finite element analysis (FEA) models, used these days to help design nearly everything, from bridges to airplanes to engine components (solving for stress, strain, flexure, heat, fatigue, and so on).

I know what you mean by that (and have said similar myself) but GCMs use finite difference analysis (FDA) and not FEA.
Richard

Alx
Reply to  richardscourtney
August 10, 2015 8:58 am

Yes quote mining quotes from comment sections is the last refuge of scoundrels who lack both a cohesive counter argument to a position and intellectual integrity.

richardscourtney
Reply to  Alx
August 10, 2015 10:22 am

Alx
You say

Yes quote mining quotes from comment sections is the last refuge of scoundrels who lack both a cohesive counter argument to a position and intellectual integrity.

So, Alx, don’t do it unless you want to demonstrate that you are a scoundrel who lacks both a cohesive counter argument to a position and intellectual integrity.
Richard

August 10, 2015 3:45 am

“Even non-linear ‘unsolvables’ like Navier Stokes fluid dynamics (aircraft air flow and drag modeled using the CFD subset of FEA) are ‘parameter’ verified in wind tunnels (as car/airplane designers actually do with full/scale models).”
The Caterham Formula One team decided that finite element analysis was all they needed and eschewed the cost of a wind tunnel.
They were the worst team on the grid, and have now gone.

Reply to  Leo Smith
August 10, 2015 7:39 am

The Caterham Formula One team decided that finite element analysis was all they needed and eschewed the cost of a wind tunnel.
They were the worst team on the grid, and have now gone.

Conversely, car race simulators with a driver can be within a fraction of a second of a real race lap.
I think this is a good example of the problem with simulators: you can get lost, not really understand the question you asked, and not really understand what the simulator is really telling you.

Editor
August 10, 2015 4:33 am

Thanks for a good discussion of the nuts and bolts of computer modeling. I enjoyed reading it.

August 10, 2015 7:13 am

The polar ice caps change independently of the earth’s “climate” because they are partially controlled by a solar wind interaction…
“High-latitude plasma convection from Cluster EDI: variances and solar wind correlations”
“The magnitude of convection standard deviations is of the same order as, or even larger than, the convection magnitude itself. Positive correlations of polar cap activity are found with |ByzIMF| and with Er,sw, in particular. The strict linear increase for small magnitudes of Er,sw starts to deviate toward a flattened increase above about 2 mV/m. There is also a weak positive correlation with Pdyn. At very small values of Pdyn, a secondary maximum appears, which is even more pronounced for the correlation with solar wind proton density. Evidence for enhanced nightside convection during high nightside activity is presented.”
‘Low to Moderate values in the solar wind electric field are positively correlated to convection velocity.”
“A positive correlation between Ring current and convection velocity.”
http://web.ift.uib.no/Romfysikk/RESEARCH/PAPERS/forster07.pdf
Low Energy ion escape from terrestrial Polar Regions.
http://www.dissertations.se/dissertation/3278324ef7/

Gary Pearse
August 10, 2015 10:04 am

joelobryan
August 9, 2015 at 10:51 am
Malthusian ladies and gents. We currently mine 20 Mtpy of copper; the total amount used (and, as Joel points out, still on the surface for reuse) is the 500M tonnes that has been mined from antiquity to the present; and the recent US Geological Survey paper estimates 3,500M tonnes yet to be developed. We have ~7.2B people on earth heading for a peak of 8.8 to 10 billion (an even lower number if we accelerated the growth of prosperity for Africa and other poor regions). We have lots. We have substitutes. We have miniaturized (a computer in the 1960s took up a sizable room with a tiny fraction of the computing power of one today that weighs less than a kilogram).
I promulgated a law recently, after a review of all the human-made global disasters that have been predicted without one success, that states: There is no possibility or even capability of humankind causing a disaster of global proportions. All disasters tend to be quick, local, painful, and then everything heals up and evidence of the disaster itself is almost totally erased. Even a year after the Hiroshima bombing, certainly a stark horrible disaster in terms of human life, radiation levels had declined to background. The Chernobyl disaster resulted in a no-go zone that now is forested and full of wild animals:
“..Within a decade or so, it was noticed that roe deer, fox, moose, bears, feral pigs, lynx, and hundreds of species of birds were in the area, many seeming to thrive. Soon there were reports of an animal feared in Russian folklore, the wolf……(some mutations were reported)….Perhaps one reason why mutations are not obvious in the larger animals is because the wolves weed out the deformed as well as the weak.”
http://www.thewildlifenews.com/2012/12/31/chernobyl-wildlife/
http://zidbits.com/2013/11/is-nagasaki-and-hiroshima-still-radioactive/
My law: It is AXIOMATIC THAT PREDICTIONS FROM DOOMSTERS HAVE NOT AND, I WOULD SAY CANNOT COME TRUE because of their missing of the overpowering dynamic human ingenuity factor in their thinking. Unconstrained by this first order principal component, their thoughts (and heartfelt concerns) soar through the roof of reality.