From the INSTITUTE OF ATMOSPHERIC PHYSICS, CHINESE ACADEMY OF SCIENCES and the “all models are wrong, some might be useful” department.
Researchers work toward systematic assessment of climate models
A research team based at the Pacific Northwest National Laboratory in Richland, Wash., has published the results of an international survey designed to assess the relative importance climate scientists assign to variables when analyzing a climate model’s ability to simulate real-world climate.
The results, which have serious implications for studies using the models, were published as a cover article in Advances in Atmospheric Sciences on June 22, 2018.

“Climate modelers spend a lot of effort on calibrating certain model parameters to find a model version that does a credible job of simulating the Earth’s observed climate,” said Susannah Burrows, first author on the paper and a scientist at the Pacific Northwest National Laboratory who specializes in Earth systems analysis and modeling.
However, Burrows noted, there is little systematic study on how experts prioritize such variables as cloud cover or sea ice when judging the performance of climate models.
“Different people might come to slightly different assessments of how ‘good’ a particular model is, depending to a large extent on which aspects they assign the most importance to,” Burrows said.
One model, for example, may better simulate sea ice while another model excels in cloud simulation. Each scientist must strike a balance between their competing priorities and goals, a difficult thing to capture systematically in data analysis tools.
“In other words, there isn’t a single, completely objective definition of what makes a ‘good’ climate model, and this fact is an obstacle to developing more systematic approaches and tools to assist in model evaluations and comparisons,” Burrows said.
The researchers found, from a survey of 96 participants representing the climate modeling community, that experts took specific scientific objectives into consideration when rating variable importance. They found a high degree of consensus that certain variables are important in certain studies, such as rainfall and evaporation in the assessment of the Amazonian water cycle. That agreement falters on other variables, such as how important it is to accurately simulate surface winds when studying the water cycle in Asia.
Understanding these discrepancies and developing more systematic approaches to model assessment is important, according to Burrows, since each new version of a climate model must undergo significant evaluation and calibration by multiple developers and users. The labor-intensive process can take more than a year.
The tuning, while designed to maintain a rigorous standard, requires experts to make trade-offs between competing priorities. A model may be calibrated at the expense of one scientific objective in order to achieve another.
Burrows is a member of an interdisciplinary research team at PNNL working to develop a more systematic solution to this assessment problem. The team includes Aritra Dasgupta, Lisa Bramer, and Sarah Reehl, experts in data science and visualization, and Yun Qian, Po-Lun Ma, and Phil Rasch, climate science experts.
To help climate modelers understand these trade-offs more clearly and efficiently, the visualization researchers are building interactive, intuitive visual interfaces that allow modelers to summarize and explore complex information about different aspects of model performance.
The data scientists are working to characterize expert climate model assessment in greater detail, building on the findings from the initial survey. Eventually, the researchers aim to blend a combination of metrics with human expertise to assess how well-suited climate models are for specific science objectives, as well as to predict how frequently experts will agree or disagree with that assessment.
“[We plan] to combine the best of both worlds, using computing to reduce manual effort and allowing scientists to more efficiently apply their human insight and judgment where it is most needed,” Burrows said.
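The kind of blended, objective-specific scoring the team describes could be sketched roughly as follows. Every variable name, weight, and error value below is invented for illustration; this is not the paper's method, just a minimal sketch of the idea that the same model errors can score differently under different objective-specific importance weights:

```python
import numpy as np

def fidelity_score(errors, weights):
    """Weighted average of normalized per-variable errors (lower is better)."""
    errors = np.asarray(errors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * errors) / np.sum(weights))

# Hypothetical per-variable model errors (e.g., RMSE normalized by observed
# variability) and made-up mean importance ratings for two objectives.
variables = ["precipitation", "evaporation", "surface_wind", "cloud_forcing"]
normalized_rmse = [0.4, 0.5, 0.9, 0.6]   # same model, same errors
amazon_weights  = [0.9, 0.8, 0.2, 0.5]   # "Amazon water cycle" objective
s_ocean_weights = [0.3, 0.3, 0.9, 0.8]   # "Southern Ocean" objective

# Different objective-specific weights give different fidelity scores.
print(fidelity_score(normalized_rmse, amazon_weights))
print(fidelity_score(normalized_rmse, s_ocean_weights))
```

Here the large surface-wind error is nearly ignored for the Amazon objective but dominates the Southern Ocean score, which is the trade-off behavior the survey set out to quantify.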
###
Here is the paper: https://link.springer.com/article/10.1007%2Fs00376-018-7300-x
Characterizing the Relative Importance Assigned to Physical Variables by Climate Scientists when Assessing Atmospheric Climate Model Fidelity
Abstract
Evaluating a climate model’s fidelity (ability to simulate observed climate) is a critical step in establishing confidence in the model’s suitability for future climate projections, and in tuning climate model parameters. Model developers use their judgement in determining which trade-offs between different aspects of model fidelity are acceptable. However, little is known about the degree of consensus in these evaluations, and whether experts use the same criteria when different scientific objectives are defined. Here, we report on results from a broad community survey studying expert assessments of the relative importance of different output variables when evaluating a global atmospheric model’s mean climate. We find that experts adjust their ratings of variable importance in response to the scientific objective, for instance, scientists rate surface wind stress as significantly more important for Southern Ocean climate than for the water cycle in the Asian watershed. There is greater consensus on the importance of certain variables (e.g., shortwave cloud forcing) than others (e.g., aerosol optical depth). We find few differences in expert consensus between respondents with greater or less climate modeling experience, and no statistically significant differences between the responses of climate model developers and users. The concise variable lists and community ratings reported here provide baseline descriptive data on current expert understanding of certain aspects of model evaluation, and can serve as a starting point for further investigation, as well as developing more sophisticated evaluation and scoring criteria with respect to specific scientific objectives.
Modellers relying on “physics” leave out a major damping effect on the “pure” behavior of the sum of individual effects: the Le Chatelier Principle (originally thought by its discoverer to apply only to chemical equilibrium), which states that additions of, or changes to the proportions of, any agent in a system at equilibrium will cause the system to react so as to resist a change in the existing equilibrium condition. Newton’s 3rd law of motion, back EMF in an electric motor being started up, the action of price on an increase in supply, Willis’s climate governor…..
Projected temperatures from models are 300% higher than observations proved to be. The obvious improvement is to include the Principle in the parameters. Even though ultimately terribly wrong, because it doesn’t pick up on inflections to a new regime of cooling, they would have had the ‘short’-term “climate” come out fairly reasonably – like its sister weather forecasts that are reasonably good for a week. However, because the objective is to provide a rationale for overturning a free-market economic paradigm and democracy, rather than the best objective judgement of future climate, they would never multiply their finding by 0.333.
When the models are fundamentally flawed no amount of free parameter tweaking will ever produce a satisfactory fit.
“One model, for example, may better simulate sea ice while another model excels in cloud simulation.”…….
Oh for crying out loud…..some got a lucky guess….they can’t tell what led up to it…or what happened after….or what effect it had
….that means they are all garbage
Models are very useful for bamboozling politicians and for stampeding the science-challenged public into accepting laws and regulations that make them less safe, less efficient, and less comfortable; that raise the costs of what they do and consume; and that grant funding and otherwise continue to finance waste and bloat.
In other words, Climate Scientists are doing art, not science.
pochas94
Kindergarten art as far as I can gather.
Quick answer – Not very…
Wiggling an elephant’s trunk
How many climate model parameters are there?
How many of those parameters are significant???
Enrico Fermi told Freeman Dyson, quoting von Neumann: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
Getting the elephant’s trunk to wiggle
See the elephant’s trunk wiggle with four complex parameters:
https://www.youtube.com/watch?v=KfNPAXplLbc
Drawing an Elephant with Four Parameters – Univ East Anglia
http://theoval.cmp.uea.ac.uk/~gcc/projects/elephant/
Paper: “Drawing an elephant with four complex parameters”, Mayer et al., 2009.
Code to wiggle an elephant’s trunk:
http://www.physics.utoronto.ca/~phy326/python/vonNeumann_elephant.py
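For reference, here is a minimal NumPy sketch of the Mayer et al. four-complex-parameter elephant. The coefficient placement follows the convention used in the linked Toronto script; the axis swap/flip at the end is cosmetic, and the fifth parameter (trunk wiggle / eye) is omitted:

```python
import numpy as np

# The four complex parameters from Mayer et al. Each parameter packs two
# real Fourier amplitudes: cosine in the real part, sine in the imaginary.
p1, p2, p3, p4 = 50 - 30j, 18 + 8j, 12 - 10j, -14 - 60j

def fourier(t, C):
    """Truncated Fourier series: Re(C[k]) multiplies cos(kt), Im(C[k]) sin(kt)."""
    f = np.zeros_like(t)
    for k, c in enumerate(C):
        f += c.real * np.cos(k * t) + c.imag * np.sin(k * t)
    return f

def elephant(t):
    """Outline of the elephant for t in [0, 2*pi]."""
    Cx = np.zeros(6, dtype=complex)
    Cy = np.zeros(6, dtype=complex)
    Cx[1] = p1.real * 1j             # sin term, amplitude 50
    Cx[2] = p2.real * 1j             # sin term, amplitude 18
    Cx[3] = p3.real                  # cos term, amplitude 12
    Cx[5] = p4.real                  # cos term, amplitude -14
    Cy[1] = p4.imag + p1.imag * 1j   # cos -60, sin -30
    Cy[2] = p2.imag * 1j             # sin term, amplitude 8
    Cy[3] = p3.imag * 1j             # sin term, amplitude -10
    # Swap axes and flip so the elephant stands upright (orientation only).
    return fourier(t, Cy), -fourier(t, Cx)

t = np.linspace(0, 2 * np.pi, 1000)
x, y = elephant(t)
# To view: plt.plot(x, y); plt.axis("equal") with matplotlib.
```

Four complex numbers (eight real amplitudes) suffice for a recognizable elephant, which is the whole point of the von Neumann quip about free parameters.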
“One model, for example, may better simulate sea ice while another model excels in cloud simulation”
LoL. Everyone knows no model excels in cloud simulation. Even the IPCC says that most of the uncertainty is because of clouds.
I couldn’t continue reading after that.
Any time you have to do tuning, that is an admission of failure, either of resolution or of the physics behind the model. In the case of climate models, both will always apply. These guys just don’t get it. The models will NEVER be good enough.
I’ve made the point repeatedly that the emphasis for accuracy – at least as portrayed to the public – is global surface temps (e.g., Hansen’s predictions). Regional temps and other parameters (cloud cover, precipitation, etc) can be garbage as long as that particular one is “right.”
So the entire “evaluation” seems to be based on opinions about what variables go into the model and not at all on comparison to the results?
I don’t know what this is but it sure ain’t science.
“In other words, there isn’t a single, completely objective definition of what makes a ‘good’ climate model,”
There you have it. They can’t even tell what it is they are trying to do, except maybe get more and bigger research grants.
Climate models would probably work much better if they used a low climate sensitivity (around 0.7 C/doubling) and included solar modulation of cloud cover and the ~60 year thermohaline quasicycle.
Unfortunately if they did that they’d prove CAGW isn’t happening and they’d be defunded.
It’s a Catch 22 situation for the modellers.
“Climate – It’s Complicated” – for those not having time to read the article
At least they are starting to assess the models.
The fact that their predictions nearly always lie on one side of the actual observations should be a warning sign that something is seriously wrong with them.
Honestly, any engineer who constructed models which were like that, and then claimed they were useful would be professionally reprimanded.
No common definition of a “good” climate model? So how was the “science is settled” mantra arrived at?
Oh yeah … because a politician and former tobacco farmer says so.
Sheer lunacy to think climate forecasting is a cake recipe. How did this $trillion con game even get this far? Climate changes and it’s warming. Good.
If a model relies on parametric coefficients that are adjusted based on the overall model performance and not independent controlled experiments, the model ceases to be a physics model and instead is an exercise in mathematical curve fitting. Curve fitting is only valid between the actual data points the equation is fitted to and has very little predictive skill in extrapolation. See von Neumann’s statement regarding parameters.
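The point about curve fitting losing skill outside the fitted data can be illustrated with a quick, hypothetical sketch (the “truth” function and polynomial degree are arbitrary choices made for this example):

```python
import numpy as np

# Fit a degree-9 polynomial through 10 samples of a smooth "truth" on [0, 1],
# then compare its error inside and outside the fitted range.
def truth(x):
    return np.sin(2 * np.pi * x)

x_train = np.linspace(0.0, 1.0, 10)
y_train = truth(x_train)                 # the fit passes through these points
coeffs = np.polyfit(x_train, y_train, deg=9)

x_interp = np.linspace(0.0, 1.0, 200)    # inside the data range
x_extrap = np.linspace(1.0, 2.0, 200)    # beyond the data range
err_interp = np.max(np.abs(np.polyval(coeffs, x_interp) - truth(x_interp)))
err_extrap = np.max(np.abs(np.polyval(coeffs, x_extrap) - truth(x_extrap)))

print(f"max error inside the data range:    {err_interp:.3g}")
print(f"max error extrapolating beyond it:  {err_extrap:.3g}")
```

The fit is excellent between the data points and diverges rapidly beyond them, which is exactly the distinction the comment draws between interpolation and extrapolation with tuned free parameters.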