Survey tries to assess the usefulness of climate models

From the INSTITUTE OF ATMOSPHERIC PHYSICS, CHINESE ACADEMY OF SCIENCES and the “all models are wrong, some might be useful” department.

Researchers work toward systematic assessment of climate models

A research team based at the Pacific Northwest National Laboratory in Richland, Wash., has published the results of an international survey designed to assess the relative importance climate scientists assign to variables when analyzing a climate model’s ability to simulate real-world climate.

The results, which have serious implications for studies using the models, were published as a cover article in Advances in Atmospheric Sciences on June 22, 2018.

In assessing climate models, experts typically evaluate across a range of criteria to arrive at an overall evaluation of the model’s fidelity. They use their knowledge of the physical system and scientific goals to assess the relative importance of different aspects of models in the presence of trade-offs. Burrows et al. (2018) show that climate scientists adjust the importance they assign to different aspects of a simulation depending on the science question the model will be used to address. Their research also shows that expert consensus on importance differs across model variables. Credit: Advances in Atmospheric Sciences

“Climate modelers spend a lot of effort on calibrating certain model parameters to find a model version that does a credible job of simulating the Earth’s observed climate,” said Susannah Burrows, first author on the paper and a scientist at the Pacific Northwest National Laboratory who specializes in Earth systems analysis and modeling.

However, Burrows noted, there is little systematic study on how experts prioritize such variables as cloud cover or sea ice when judging the performance of climate models.

“Different people might come to slightly different assessments of how ‘good’ a particular model is, depending to large extent on which aspects they assign the most importance to,” Burrows said.

One model, for example, may better simulate sea ice while another model excels in cloud simulation. Each scientist must strike a balance between their competing priorities and goals–a difficult thing to capture systematically in data analysis tools.

“In other words, there isn’t a single, completely objective definition of what makes a ‘good’ climate model, and this fact is an obstacle to developing more systematic approaches and tools to assist in model evaluations and comparisons,” Burrows said.

The researchers found, from a survey of 96 participants representing the climate modelling community, that experts took specific scientific objectives into consideration when rating variable importance. They found a high degree of consensus that certain variables are important in certain studies, such as rainfall and evaporation in the assessment of the Amazonian water cycle. That agreement falters on other variables, such as how important it is to accurately simulate surface winds when studying the water cycle in Asia.

Understanding these discrepancies and developing more systematic approaches to model assessment is important, according to Burrows, since each new version of a climate model must undergo significant evaluation and calibration by multiple developers and users. The labor-intensive process can take more than a year.

The tuning, while designed to maintain a rigorous standard, requires experts to make trade-offs between competing priorities. A model may be calibrated at the expense of one scientific objective in order to achieve another.

Burrows is a member of an interdisciplinary research team at PNNL working to develop a more systematic solution to this assessment problem. The team includes Aritra Dasgupta, Lisa Bramer, and Sarah Reehl, experts in data science and visualization, and Yun Qian, Po-Lun Ma, and Phil Rasch, climate science experts.

To help climate modelers understand these trade-offs more clearly and efficiently, the visualization researchers are building interactive, intuitive visual interfaces that allow modelers to summarize and explore complex information about different aspects of model performance.

The data scientists are working to characterize expert climate model assessment in greater detail, building on the findings from the initial survey. Eventually, the researchers aim to blend a combination of metrics with human expertise to assess how well-suited climate models are for specific science objectives, as well as to predict how frequently experts will agree or disagree with that assessment.
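
As a rough sketch of what blending metrics with expert-assigned importance might look like (not the team’s actual method; the variable names, weights, and error values below are hypothetical placeholders):

# Hypothetical sketch: combine per-variable model errors with expert-assigned
# importance weights into a single fidelity score for one science objective.
normalized_errors = {              # model error relative to observations, 0 = perfect
    "precipitation": 0.20,
    "sea_ice_extent": 0.35,
    "shortwave_cloud_forcing": 0.50,
    "surface_wind_stress": 0.40,
}
importance = {                     # expert importance ratings for this objective
    "precipitation": 0.9,
    "sea_ice_extent": 0.3,
    "shortwave_cloud_forcing": 0.8,
    "surface_wind_stress": 0.2,
}

def weighted_fidelity(errors, weights):
    """Importance-weighted mean error; lower is better for the chosen objective."""
    total = sum(weights.values())
    return sum(errors[v] * weights[v] for v in errors) / total

print(round(weighted_fidelity(normalized_errors, importance), 3))   # 0.348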

“[We plan] to combine the best of both worlds, using computing to reduce manual effort and allowing scientists to more efficiently apply their human insight and judgment where it is most needed,” Burrows said.

###

Here is the paper: https://link.springer.com/article/10.1007%2Fs00376-018-7300-x

Characterizing the Relative Importance Assigned to Physical Variables by Climate Scientists when Assessing Atmospheric Climate Model Fidelity

Abstract

Evaluating a climate model’s fidelity (ability to simulate observed climate) is a critical step in establishing confidence in the model’s suitability for future climate projections, and in tuning climate model parameters. Model developers use their judgement in determining which trade-offs between different aspects of model fidelity are acceptable. However, little is known about the degree of consensus in these evaluations, and whether experts use the same criteria when different scientific objectives are defined. Here, we report on results from a broad community survey studying expert assessments of the relative importance of different output variables when evaluating a global atmospheric model’s mean climate. We find that experts adjust their ratings of variable importance in response to the scientific objective, for instance, scientists rate surface wind stress as significantly more important for Southern Ocean climate than for the water cycle in the Asian watershed. There is greater consensus on the importance of certain variables (e.g., shortwave cloud forcing) than others (e.g., aerosol optical depth). We find few differences in expert consensus between respondents with greater or less climate modeling experience, and no statistically significant differences between the responses of climate model developers and users. The concise variable lists and community ratings reported here provide baseline descriptive data on current expert understanding of certain aspects of model evaluation, and can serve as a starting point for further investigation, as well as developing more sophisticated evaluation and scoring criteria with respect to specific scientific objectives.

Joe Wagner
June 27, 2018 1:30 am

Another roll call rather than compare things to the Real World.

Trevor
Reply to  Joe Wagner
June 28, 2018 2:33 am

Joe Wagner :
“Another roll call rather than compare things to the Real World.”
YES ! SPOT ON !
When asked about a career choice regarding ” a Political correctness study”
in which the questioner asked him “should he tell the TRUTH and RISK
losing the University job OR play along , get the job and then SPEAK OUT ?”
Psychologist Prof. Jordan Peterson replied :
“Tell the truth ALWAYS or you will screw up the ONE
THING THAT YOU CAN RELY ON….yourself .
Always TELL THE TRUTH and THEN ASSUME that the OUTCOME
will ALWAYS BE THE BEST POSSIBLE OUTCOME !
NEVER COMPROMISE YOURSELF ! ”
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
I wonder HOW MANY SCREWED UP “CLIMATOLOGISTS” THERE ARE ????
Probably MANY more than the number of SCREWED UP CLIMATE MODELS !

Eric Stevens
June 27, 2018 1:52 am

Surely a usefully reliable model should come reasonably close to reality in all major aspects. There is no point in using a model which, for example, reasonably accurately predicts arctic ice but doesn’t come within a bull’s roar of predicting the monsoon winds or ENSO or North American rainfall etc.

Phoenix44
Reply to  Eric Stevens
June 27, 2018 2:26 am

True but it slightly misses the point. The model is not getting the arctic ice right as such, it is being tweaked so that it produces past and current ice levels. That doesn’t mean it will get future ice right, because the tweaks are simply fudge factors.

What the models tell us is that the assumptions and first principles being put into models are not correct, because they do not model the current and past climate correctly.

Would you get into a rocket to the Moon based on a model that needs a few tweaks to simulate you getting there?

Donald Horne
Reply to  Phoenix44
June 27, 2018 8:50 am

i.e., GIGO.

Reply to  Phoenix44
June 27, 2018 9:02 am

Adjusting parameters in a model of a multi-order, coupled, non-linear chaotic system to get the past “right”, and expecting that to mean that predictions / forecasts / projections from the model for the future are going to be “right” is sort of silly. (I’m trying to be nice here.)

rocketscientist
Reply to  Phoenix44
June 27, 2018 9:04 am

Well, since you mention it, almost all space vehicle flight paths are corrected in flight with minor tweaks called “mid-course corrections”. These involve propulsion burns to orient the vehicle and make small alterations to the velocity and thrust vector. These are needed due to the variability of the atmosphere, gravitational forces and other factors which cannot be instantaneously measured, but whose effects can be measured after the fact. Other reasons are to account for manufacturing variability in mass, CG or in propulsion performance. We account for these factors by incorporating “margin” in the designs. Such margins would be like carrying more fuel than your minimum calculations allowed for.

However the precautionary principle of always having more is NOT a possibility for all systems. The rockets do need to lift off.

MarkW
Reply to  rocketscientist
June 27, 2018 9:16 am

“The rockets do need to lift off.”

Picky, picky

Reply to  Eric Stevens
June 27, 2018 2:59 am

Surely a usefully reliable model should come reasonably close to reality in all major aspects.

A model is, by definition, not the real thing, and its very utility is that it leaves out or simplifies much in order to actually BE useful.

Inevitably that means a model that is useful in one context is useless in another.
E.g. a map of roads doesn’t tell you where the wildflowers are, and a naturist’s map doesn’t tell you where the police station is.

The only thing that does is the world itself in which all things are.

The only accurate model of the world is the world itself.

Now there is a school of thought that says that since the world is a function of natural laws and cause and effect, the world can be described (modeled) by a much smaller information set. Just the natural laws. I.e. the world is algorithmically losslessly compressible, information-wise.

That is, we can describe the world’s behaviour entirely as a set of (partial) differential equations with respect to time. Like Newton’s laws.

Unfortunately this leads us directly into the starting condition problem. In order to exactly predict the course of events in the world we would need to know all the positions and states of everything in the Universe before we started the integration.

Unfortunately it transpires that the information contained in the universe – even if all its natural laws were known – is as big as the universe itself, so to speak. That is, you would need another universe to predict which way this one will go. The world, it turns out, is not algorithmically losslessly compressible, in the final instance.

Take the asteroid problem. In an ideal case we would know where every single one was, its position and velocity and would solve the many-body problem to predict which one would be a danger in say 1000 years and get a spacecraft ready to visit it and nudge it off course enough to be safe.

In reality not only do we not know the position and velocity of many potentially dangerous asteroids, even if we did the solution to a million body problem would require a solar system sized computer to be able to predict that far ahead. We are heading into chaos maths here.

Sadly most scientists and most lay people are 50 years behind the curve, and still think that science is capable of exactly describing and exactly predicting the future.

In reality it is very very far from that, and will always be. Leading to an interesting political and managerial insight: We should plan for contingency, not seek to predict the outcome, or worse, to attempt to control it.

Reply to  Leo Smith
June 27, 2018 4:15 am

Leo Smith

I have always believed science is an inexact science when applied to the real world. The example in the current debate is, of course, CO2. In a clinical lab environment we can fairly accurately understand the properties of a single molecule of the gas, but toss it into the atmosphere and all bets are off.

Apart from water vapour, there are particulates, man made or otherwise, other gases, winds, precipitation and, of course, clouds, and sunlight itself, which might dilute, or exacerbate the impact that single CO2 molecule has on our environment.

I suspect solving the asteroid conundrum would be easier than following a single molecule of CO2 throughout its lifetime to understand precisely what it does, under every condition it encounters. Then one has to start on the other 399 ppm molecules, each of which takes a different path over their lifetimes.

But then, I’m not a scientist, I’m a layman, who is astonished at the claims made by ‘climate scientists’ over the last 40 years, none of which have manifested themselves. These wild ‘scientific’ predictions are now having a damaging effect on science itself. I’m not the only layman sceptical of climate predictions, and that scepticism rolls over into other sciences as well.

Hopefully, Trump’s gradual dismantling of the climate science apparatus that operates within US government departments to their own agenda will redress the balance and see the restoration of faith in science by everyone.

Reply to  HotScot
June 27, 2018 6:01 am

The problem is that the science may be exact, but it still doesn’t help predict the future.

The integration of a perfect differential equation depends not only on the equation itself, but on the starting conditions.

E.g. the acceleration of a mass is uniquely given by the formula F = ma. So does that tell us how fast it will be going after a certain period of known thrust?

Nope. That also depends on how fast it was going to start with…

It gets worse with non-linear equations. For example a plane, given some reverse thrust, might slow down a bit and start to descend, or if its speed fell below stall speed, spin out of control and crash. The outcomes are so radically different… because in the end the equations of flight are very non-linear.

Or take a real ‘edge case’

Due to a butterfly flapping its wings in a Brazilian rain forest, a gust of wind managed to deflect the bullet that killed President Kennedy enough, so he survived…

Nothing wrong with the physics. Just the data isn’t accurate or complete enough.

We engineers are, like snipers, so very accustomed to operating not on exactitudes, but within limits. We build machines that as far as possible exhibit linear or near-linear behaviour, within limits, and we strive to keep them there. So our machines are predictable.

These machines form the environment in which most people live today, the urban/suburban landscape.

These are the snowflake generation – everything around them is man made, controlled, predictable.

They can’t actually deal with Nature in the raw.

They believe Nature can be easily controlled.

Edwin
Reply to  Leo Smith
June 27, 2018 6:29 am

Leo, since a friend in the game first explained some of the early models to me I have pondered a basic question. How does one model an open ended Chaotic system even with known and accurate starting conditions, a system influenced by unknowable variables outside the immediate environment, e.g., unpredictable fluctuations in the sun? Modeling an atomic explosion, the reason supercomputers were invented, had an end time component. When modeling the flight characteristics for a successful aircraft it would seem one would not continue using a design that always ended in either instability or structural failure or a model that always produced such a design.

philo
Reply to  Edwin
June 27, 2018 9:28 am

Aeronautics is much simpler than the climate, but the engineers still use multiple methods to design and build airplanes. Fluid dynamic modelling is useful, but physical models of planes are almost always built and tested in a wind tunnel. Fluid dynamics models still have trouble with boundary conditions: when does the plane stall, when does flutter start. Engineering models still have trouble with fatigue. One of the reasons carbon fiber, despite its cost, is being used more is that it is orders of magnitude better than crystalline metals in fatigue. The cost is well worth the assurance that the airframe, with a usable life of a couple hundred years, will be way obsolete in maybe 50, but it won’t break up in the air.

And in the end, the hand-picked flight crew still takes the new plane up for its first flight, and it still goes through rigorous static testing and flight testing to make sure it flies like everybody thought it would.

MarkW
Reply to  philo
June 27, 2018 11:02 am

Carbon fiber is also a lot lighter than a piece of aluminum of similar strength.
Carbon fiber was first used on planes for non-structural components.

Reply to  Edwin
June 27, 2018 11:55 am

Edwin: the short answer is we don’t.

Any machine that has built-in chaotic behaviours is useless unless it’s kept out of the chaotic region. Or we can ignore the chaotic behaviour by having some other constraint.

This is cutting-edge modelling and it’s still nowhere near good enough – Formula 1 teams use CFD (computational fluid dynamics) but they don’t rely on it – they still use wind tunnels. And finally they test on track.

We simply do not have the techniques to do what the climate modellers claim to be able to do.

They simply lump huge areas together and ‘parameterize’ them. Usually with little or no justification.

Edwin
Reply to  Leo Smith
June 27, 2018 12:40 pm

Thanks Leo. Formula One teams spend a lot of money on design and computer modeling since their non-race-weekend testing has been reduced. On race weekends you will see a lot of techniques still trying to determine if they have the correct design, e.g. pitot tube arrays and various liquids sprayed on the car to determine flow. I still haven’t figured out how all the winglets help, but that is for another day.

Edwin
Reply to  Leo Smith
June 27, 2018 6:17 am

Leo, In other words computer models are a small imitation of the real thing.

We know that when trying to model any chaotic system, the starting conditions one uses, even down to the last decimal, determine the ultimate results, which can vary dramatically.

Ciphertext
Reply to  Leo Smith
June 27, 2018 8:04 am

“We should plan for contingency, not seek to predict the outcome, or worse, to attempt to control it.” — Leo Smith

Well said.

June 27, 2018 1:54 am

I hope they can give an objective level of usefulness compared to a chocolate teapot.

PTP
Reply to  son of mulder
June 27, 2018 8:21 am

That would be a question of, “Usefulness to whom?”

The usefulness to the modelers can be calculated by a very precise mathematical equation.

Model’s Predictions of Disaster > Reality = $$$

Chris Wright
June 27, 2018 1:56 am

““Climate modelers spend a lot of effort on calibrating certain model parameters to find a model version that does a credible job of simulating the Earth’s observed climate,”

This pretty well sums up what’s wrong with climate models. Tuning arbitrary parameters in order to get a better fit has nothing to do with understanding how the climate works. It’s simply a sophisticated – and ruinously expensive – form of curve fitting, in the hope that a better curve fit will give better predictions. Almost certainly it won’t – otherwise pretty well anyone could make a fortune on the stock markets.

If the models predict past climate with magical accuracy but fail to predict the future, then it’s a clear indication that they have indeed been adjusted to match historical data. Of course, this is exactly what happened: over thirty years the models predicted far more warming than actually occurred.

A frequently quoted “proof” of AGW is based on the models – with CO2 they magically match historical data, but without it they don’t, thus proving AGW. But of course if the models have been adjusted / tuned then the proof is instantly invalidated. To claim this as proof – as the IPCC does – is not just wrong and dishonest, it is close to fraudulent.
Chris

Phoenix44
Reply to  Chris Wright
June 27, 2018 2:28 am

Exactly. It would be perfectly possible to tweak the model to produce the claimed warming without using CO2 as the driver. Just change an assumption or two.

It is literally nonsensical to claim that CO2 works as the only possible driver, if you let me tweak the model to show that CO2 is the only possible driver.

Reply to  Phoenix44
June 27, 2018 6:30 am

I actually did this some years back – tracking temperature rise to a ten year delayed rise in commercial air traffic. (Think contrails, not chem-trails!!!)

It actually fitted better than CO2 did even into the pause…

Reply to  Chris Wright
June 27, 2018 6:23 am

Actually those very same models disprove AGW.

In order for CO2 to be the dominant agency of modern climate change, you need – using their feedback system – to have an equation of the form

ΔT = λ·k·log(ΔCO2)

Now k·log(ΔCO2) is what the strict physics says: that temperature (all other things being equal) will rise perhaps 0.4 degrees C for every doubling of CO2.

Patently this didn’t match late 20th century temperature rises, so they ‘invented’ a mysterious ‘positive feedback’ whose fiddle factor is lambda (λ).

This, when fiddled with enough, gave them a climate sensitivity of the right sort of order to match late 20th century rises – so around 1-3 degrees C of rise per doubling of CO2.
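
Plugging rough numbers into that schematic shows how the fiddle factor does all the work (the 0.4 C per doubling and the 1-3 C range are the commenter’s figures, not settled values; this is just arithmetic):

import math

# Commenter's schematic: delta_T = lambda * k * log(CO2 ratio).
# Choose k so that the "no feedback" warming per doubling is 0.4 C (his figure).
k = 0.4 / math.log(2.0)

for lam in (1.0, 3.0, 7.5):            # lam = 1 means no feedback at all
    delta_T = lam * k * math.log(2.0)  # warming for one doubling of CO2
    print(f"lambda = {lam:4}: {delta_T:.1f} C per doubling")
# lambda = 1.0 gives 0.4 C; lambda = 7.5 gives 3.0 C, the top of his 1-3 C range.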

And that was scary stuff.

Then in the 21st century disaster struck. CO2 was marching upward steadily as if nothing had happened, but temperatures simply were not.

For a while they could witter on about ‘natural causes’ etc etc. But after 17 years or so really the excuses are a bit thin. Anything powerful enough to stop 3 degrees C per CO2 doubling pretty much dead in its tracks is probably also significant enough to explain all late 20th century warming without the need for ‘positive feedback’ .

If you understand statistics, which I pretty much don’t, what happened with the ‘pause’ is that the correlation between CO2 rise and temperature broke down completely, to the point where it’s arguable that the ‘correlation’ is almost entirely random. The data pretty much prove that CO2 has little or no impact on global temperatures compared to other stuff that we don’t yet understand. Which might, or might not, be man made.

However ice ages and interglacials and Mediaeval/Roman warm periods and Holocene Optima strongly suggest that climate has varied a heck of a lot more than one or two degrees in the past 50,000 years.

Their formulae simply do not fit the data. Their formulae, by failing to fit, DISPROVE AGW.

Hoist by their own petards….and many many experienced and competent scientists have noticed this, and quietly shuffled off the AGW bandwagon.

Which only exists now in non scientific fraternities as a piece of commercial and political faux news.

PTP
Reply to  Leo Smith
June 27, 2018 8:30 am

I’ve noticed that, for all the mathematical modeling, forecasting, and compilation of complex data sets involved, relatively few climate scientists have a solid background in statistics.

Reply to  Leo Smith
June 27, 2018 10:33 am

Leo Smith

Beautifully explained, even for a layman like me.

Thank you.

Richard S Courtney
Reply to  Leo Smith
June 27, 2018 10:46 pm

Leo Smith:

You rightly say of climate modelers,
“… they ‘invented’ a mysterious ‘positive feedback’ whose fiddle factors is lambda ( λ )”

YES! And, as you say, this disproves anthropogenic (i.e. human-made) global warming (AGW) as emulated by the climate models, because the models approved by the UN’s Intergovernmental Panel on Climate Change (IPCC) predict the feedback must create the ‘tropospheric hot spot’.

The absence of the ‘tropospheric hot spot’ demonstrates that the models are complete failures as scientific emulations of physical reality. Their only “success” is in generation of computer games that promote a political ideology.

The ‘tropospheric hot spot’ is warming at altitude that is between two-times and three-times the warming at the surface in the tropics. It is clearly explained by the UN Intergovernmental Panel on Climate Change (IPCC) in Chapter 9 of IPCC WG1 AR4 and specifically Figure 9.1.

The IPCC Chapter can be read at
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf
and its Figure 9.1 is on page 675.

Importantly, the text says,
“The major features shown in Figure 9.1 are robust to using different climate models.”

The Figure caption says;
“Figure 9.1. Zonal mean atmospheric temperature change from 1890 to 1999 (°C per century) as simulated by the PCM model from
(a) solar forcing,
(b) volcanoes,
(c) well mixed greenhouse gases,
(d) tropospheric and stratospheric ozone changes,
(e) direct sulphate aerosol forcing and
(f) the sum of all forcings.
Plot is from 1,000 hPa to 10 hPa (shown on left scale) and from 0 km to 30 km (shown on right). See Appendix 9.C for additional information. Based on Santer et al. (2003a).”

The tropospheric ‘hot spot’ is the big, red blob that is only seen in Panels (c) and (f) of Figure 9.1.

In other words, the ‘hot spot’ is a unique effect of “well mixed greenhouse gases” predicted by the PCM models the IPCC approves. And that effect is so great that the models predict it has overwhelmed all the other significant forcings.

But the ‘hot spot’ has not occurred, and this is indicated by independent measurements obtained by radiosondes mounted on balloons (since 1958) and by MSUs mounted on satellites (since 1979).

The ‘hot spot’ is so large an effect that it should be clearly seen if the models provide a representation of climate change as it exists in the real world. And the warming from “well mixed greenhouse gases” has been greatest most recently in the modelled period, so it should be very obvious in the radiosonde and MSU data. Simply, the ‘tropospheric hot spot’ is absent from the real-world observations.

In other words,
IF ONE BELIEVES THE IPCC THEN THE ABSENCE OF THE ‘HOT SPOT’ IS A DIRECT REFUTATION OF THE AGW HYPOTHESIS AS EMULATED BY THE CLIMATE MODELS.

However, the reason for the ‘hot spot’ is not unique to anthropogenic (i.e. human-made) warming or “well mixed greenhouse gases” and is as follows.
1.
Water vapour is the major greenhouse gas. And the climate models constructed to promote assertions of anthropogenic global warming (AGW) assume that as temperature increases so will the amount of water vapour held in the atmosphere.
2.
CO2 is also a greenhouse gas so increased CO2 in the air increases radiative forcing to increase temperature.
3.
The models assume increased temperature induced by increased atmospheric CO2 increases the amount of water held in the atmosphere (because of point 1).
4.
But water vapour is the main greenhouse gas so radiative forcing is increased a lot by the increased amount of water the models assume is held in the atmosphere as a result of increased atmospheric CO2.
5.
The large increase to radiative forcing from the increased amount of water held in the atmosphere increases the temperature a lot.

Points 1 to 5 are known as the Water Vapour Feedback (WVF).
The direct effect on global temperature from a doubling of CO2 in the air would be about 1 deg.C. And (according to e.g. the IPCC) the effect of the WVF is to increase this warming to between 3 and 4.5 deg.C.

Clearly, there are large assumptions in calculation of the WVF: this is undeniable because the range of its calculated effect is so large (i.e. to increase warming of ~1 deg.C to a warming in the range 3 to 4.5 deg.C).
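
For illustration, the implied size of the feedback can be backed out with the standard feedback-amplifier form T_total = T_direct / (1 - f); this rearrangement is a textbook simplification added here, not part of the argument above:

# Back out the feedback fraction f implied by the quoted warming range
# (a textbook feedback-amplifier simplification, not taken from the comment).
T_direct = 1.0                      # deg C per doubling, direct CO2 effect

for T_total in (3.0, 4.5):          # amplified warming quoted above (IPCC range)
    f = 1.0 - T_direct / T_total    # since T_total = T_direct / (1 - f)
    print(f"{T_total} C total implies feedback fraction f = {f:.2f}")
# 3.0 C implies f of about 0.67; 4.5 C implies f of about 0.78.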

One of the assumptions is how much water vapour is held in the atmosphere and where it is distributed. Large effects of the WVF are induced by assumption of large increase to water vapour at altitude.

The major radiative forcing effect is at altitude in the tropics because
(a) long wave radiation is from the Earth’s surface,
(b) emission of the radiation is proportional to the fourth power of the surface temperature,
(c) the surface temperature is hottest in the tropics, and
(d) cold air holds little water vapour.

Temperature decreases with altitude and, therefore, the ability of the atmosphere to hold water vapour decreases with altitude. So, small increase to temperature with altitude permits the air at altitude to hold more water. And, therefore, enables WVF at altitude.

The increase to WVF with altitude causes largest increase to radiative forcing (so largest increase to temperature) at altitude. And the radiative forcing effect is strongest in the tropics so the largest increase to temperature at altitude is in the tropics.

This ‘largest increase to temperature at altitude in the tropics’ is the ‘hot spot’. But the ‘hot spot’ is missing.

This could be because
(i) the assumption of WVF is wrong,
or
(ii) the calculated increase to radiative forcing of CO2 and/or water vapour is wrong,
or
(iii) the calculated ability of air to hold water vapour is wrong,
or
(iv) something else as yet unknown.

Whichever of these is true, it is certain that the absence of the ‘tropospheric hot spot’ is conclusive evidence that
Climate models fail to represent observed climate changes.
Or
There has been no global warming from “well mixed greenhouse gases”.
Or
There has been no global warming from any cause including “well mixed greenhouse gases”.

In other words, climate models predicting global warming are complete failures as scientific emulations of physical reality, and their only “success” is in generation of computer games that are used to promote a political ideology.

Richard

richard
June 27, 2018 2:18 am

Always good to repeat the lecture from Dr Christopher Essex –

“Believing in Six Impossible Things Before Breakfast, and Climate Models”

The climate models section starts at 25:16.

TDBraun
Reply to  richard
June 27, 2018 7:38 am

That was good.
“There are no experts on what nobody knows.”
“Even when you don’t put garbage in, you can still get garbage out.”

tty
June 27, 2018 2:19 am

“A model may be calibrated at the expense of one scientific objective in order to achieve another.”

Having some experience with computer models (in another field than climate) I can tell that this is not “tuning”, it is faking. Tuning means adjusting one or more variables where you only have approximate, not exact, data; when done correctly it improves all aspects of the model.
Incidentally tuning really isn’t practicable for more than two or possibly three variables. The number of possible combinations becomes literally infinite.

Reply to  tty
June 27, 2018 5:29 am

Have you considered using a neural network to tune parameters? It works well if the model runs in a reasonable amount of time, because multiple runs are needed to train the network.

tty
Reply to  Fernando L
June 27, 2018 7:13 am

“The problem is that for multiple variables you almost always have multiple different solutions that fit the problem well.”

Exactly. They will all fit the training data, but only one (or none) of them is physically correct. If you have a lot of data you can do the tuning on only part of the data and then verify whether it will also fit the other part, for which it wasn’t tuned. Unfortunately there is usually not enough data to do that.

don k
Reply to  tty
June 27, 2018 8:01 am

And even if there is enough data to train and test, the problem of uncontrolled variables remains. For example — How to handle vulcanism in a climate model. It would all be very well that you can “predict” the past if you know enough about past volcanic activity. But without clairvoyance, you still couldn’t predict the future.

MarkW
Reply to  don k
June 27, 2018 9:23 am

As long as the eruption isn’t big enough to tip us into an ice age, the impact of a volcano will dissipate over a few months to years. So it is safe to ignore transient events when making a prediction.

DaveS
Reply to  tty
June 27, 2018 10:08 am

That was our approach to using process models; calibrate using year 1 data, validate using year 2 data. Data availability (or lack thereof) was a problem at times.

Ben of Houston
Reply to  tty
June 27, 2018 5:34 am

It’s not literally infinite. Assuming standard byte length, the number of possibilities is (2^16)^(Number of Variables). So for 2 variables, it’s ~1 billion possibilities and for 4, it’s ~1 quadrillion.

The problem is that for multiple variables you almost always have multiple different solutions that fit the problem well. If it’s not perfect, different solutions will fit different parts of the data much better. This often indicates that you don’t have the wrong factors, but the wrong equation.

For example, the joke pirates-stop-global-warming correlation actually does a good job of modeling the cooling of the 40s and the “pause” in the early 2000s, as these correlated with the WW2 shipping raids and the Somali piracy incidents. To compare, CO2 does exceptionally poorly with these despite having a physical basis.
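
A toy illustration of that degeneracy (purely synthetic data, unrelated to any real climate model):

# If the model output only depends on a + b, then wildly different (a, b)
# pairs reproduce the "calibration" data equally well.
xs = [1.0, 2.0, 3.0, 4.0]
observations = [3.0, 6.0, 9.0, 12.0]        # generated from y = 3 * x

def model(x, a, b):
    return (a + b) * x                      # two tunable parameters, one identifiable quantity

def sum_sq_error(a, b):
    return sum((model(x, a, b) - y) ** 2 for x, y in zip(xs, observations))

for a, b in [(1.0, 2.0), (10.0, -7.0), (300.0, -297.0)]:
    print(f"a = {a:7}, b = {b:7}, error = {sum_sq_error(a, b):.6f}")
# All three pairs fit the data perfectly; at most one of them is physically correct.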

Gary Pearse
Reply to  Ben of Houston
June 27, 2018 6:22 am

An oldie is the correlation of the height of the hem of women’s skirts above the floor in fashions to the price of copper.

PTP
Reply to  Gary Pearse
June 27, 2018 8:45 am

That one could actually demonstrate a real world connection.

It’s not that difficult to look through the history of many cultures and chart trends in the relative modesty of women’s fashion, closely tracking economic conditions.

Considering how the dynamics of mating behaviors in a population tend to be governed by economic laws (Supply and Demand, for example), it’s not surprising that women would tend to dress less modestly the more wealthy are the men for whom they are competing.

MarkW
Reply to  Ben of Houston
June 27, 2018 9:25 am

Ben, what you have forgotten is that a variable isn’t a binary condition. Either there or not there.
These are analog values. A value could be 1.0, or it could be 1.1, or it could be 1.11. Any value within the limits is possible.

D. J. Hawkins
Reply to  MarkW
June 27, 2018 10:17 am

Sorry, he did take that into account. He said “Assuming standard byte length…”; that gets you 65,536 levels for your variable. With 2 variables, the actual solution set covers 4,294,967,296 possible outcomes. Four billion, not one billion as he originally said, assuming his algorithm is correct.

MarkW
Reply to  D. J. Hawkins
June 27, 2018 11:06 am

65,535 is the maximum for a 16-bit computer.
Standard byte length for desktops has been 32 bits for well over a decade.
For scientific machines it’s closer to 128 or even 256 bits.

If you go floating point, you can get even more precision.

philo
Reply to  tty
June 27, 2018 9:50 am

It doesn’t matter whether or how much a model is tweaked. Any mathematical model is only valid over the range it was tested for. The simplest example I learned in first-year physics: given a bunch of data points you can model what is happening with several different curves. Tensile strength: you can test and measure the stress and strain over a certain range and get very repeatable results. You can model the results with any number of math equations. But none of those models can predict the actual breaking strength. None of the models are valid when the sample starts to fail, because the way materials fail generally has nothing to do with the way materials respond to stress before they break.

So you can draw any curve you want, or any model results you want, but they mean nothing outside the range of testing. No climate models have ever been actually tested against reality. The earliest work, by Svante Arrhenius, has been tested and was woefully inaccurate.

Phoenix44
June 27, 2018 2:23 am

“Climate modelers spend a lot of effort on calibrating certain model parameters to find a model version that does a credible job of simulating the Earth’s observed climate.”

That is sugar-coating it. “Parameters” are just assumptions. What this says is that models built from “first principles” and using assumptions that match climate science’s assumptions about climate, cannot even get the past and the current climate right.

The idea that then changing those assumptions and principles so that you can model the current climate will somehow mean you have got the future right is utterly absurd.

The models are nowhere near right. They just are not.

commieBob
June 27, 2018 2:33 am

If you study a large number of things about a system, it is probable that you will find false positive correlations (example).

It is quite possible that a particular model will appear to accurately describe the sea ice between Iceland and Greenland. It will be a false positive because you’re basically asking the model a zillion questions and it accidentally gets some of them right. That means that if you try to understand physical processes using the model you will get no useful answers.
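
A quick synthetic demonstration of the effect (random numbers only, no climate data involved):

import random

# Ask one target series a thousand unrelated "questions" and some answers
# will look impressive purely by chance.
random.seed(0)
n = 30                                          # length of each series

def corr(a, b):
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

target = [random.gauss(0, 1) for _ in range(n)]
best = max(abs(corr(target, [random.gauss(0, 1) for _ in range(n)]))
           for _ in range(1000))
print(f"best |correlation| out of 1000 unrelated series: {best:.2f}")
# Typically well above 0.5 -- and entirely accidental.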

Reply to  commieBob
June 27, 2018 3:07 am

It is entirely possible that every single thing we think we know in physics is just accidental, temporal and local.

We cannot know beyond our experience.

And that is always very limited…

commieBob
Reply to  Leo Smith
June 27, 2018 5:15 am

What you say is absolutely true.

Everything useful is an approximation. If I were to try to model a simple circuit by modeling the behavior of every single electron, I would get garbage. For practical purposes Ohm’s Law works just fine though.

In light of the above, there’s reason to believe that CM’s Irreducibly Simple model outperforms the GCMs.

Alan Tomalty
Reply to  commieBob
June 27, 2018 10:15 am

This one sentence illustrates all that is wrong with the whole global warming fiasco:

“The paper, Why models run hot: results from an irreducibly simple climate model, by Christopher Monckton of Brenchley, Willie Soon, David Legates and Matt Briggs, survived three rounds of tough peer review in which two of the reviewers had at first opposed the paper on the ground that it questioned the IPCC’s predictions.”

ЯΞ√ΩLUT↑☼N
June 27, 2018 2:42 am

Maybe they’d be just as accurate forecasting the swirls on a soap bubble. I’m not holding my breath.

June 27, 2018 2:58 am

Tweak and adjust them all you want, but these toy climate simulators will never express the movement of heat like the real atmosphere and oceans. From the abstract: “Evaluating a climate model’s fidelity (ability to simulate observed climate) is a critical step in establishing confidence in the model’s suitability for future climate projections, and in tuning climate model parameters.” OK, which model will have sufficient fidelity to the observed conditions so as to numerically reproduce a one-inch-per-hour rate of rainfall? Such a precipitation rate implies upward heat movement of about 16,000 W/m^2, perhaps to an altitude of 30,000 feet or higher, involving multiple phase changes of water, the powerful natural refrigerant of overwhelming influence on heat transport. Does your improved toy do that? I didn’t think so.
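
For what it’s worth, the arithmetic behind that ~16,000 W/m^2 figure is easy to check (the latent heat constant below is an assumed round value for condensation):

# Back-of-envelope check of the latent heat flux implied by 1 inch/hour of rain.
rain_rate = 0.0254                 # m of liquid water per hour (one inch)
water_density = 1000.0             # kg/m^3
latent_heat = 2.26e6               # J/kg released on condensation (approximate)

mass_flux = rain_rate * water_density / 3600.0   # kg per m^2 per second
heat_flux = mass_flux * latent_heat              # W/m^2
print(f"{heat_flux:.0f} W/m^2")                  # roughly 16,000 W/m^2, as stated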

Marcus
Reply to  David Dibbell
June 27, 2018 4:28 am

I wonder if any of the “Climate Models” predicted this ?? LOL
June 26th 2018
“Summer SNOW wallops Newfoundland, see the misery HERE”

https://www.theweathernetwork.com/news/articles/newfoundland-late-june-snow-dangerous-travel-gusty-winds-gander-deer-lake-badger/105259/

http://rstorage.filemobile.com/storage/33060138/15

MarkW
Reply to  David Dibbell
June 27, 2018 9:27 am

My model predicted that you would say that.

Wiliam Haas
June 27, 2018 3:47 am

If these people really knew what they were doing they would have only a single model with no fudge factors in it but they do not. The fact that there are so many different models under consideration is evidence that a lot of guess work has been involved.

Wiliam Haas
June 27, 2018 3:49 am

Such a survey is politics and not science.

Dr. S. Jeevananda Reddy
June 27, 2018 3:49 am

Present day climate modelers use computers that consume huge amounts of energy, which is responsible for a huge quantity of CO2 release. With the end result of zero???

Dr. S. Jeevananda Reddy

June 27, 2018 4:58 am

Modelling naychur’s complexity, things go asstray.

2hotel9
June 27, 2018 5:04 am

They can not accurately predict weather in a 72 hour time frame, much less accurately predict climate or sea levels in a 100 year time frame, or model past climate or sea level with accuracy, so no, they are not useful in the real world. They are quite useful in pushing a political agenda and in spreading alarm and panic in much of the population, so these models will continue to be used.

Komrade Kuma
June 27, 2018 5:04 am

Climate Models = 21st century tea leaves
Climate Scientists = 21st century shamans

Reply to  Komrade Kuma
June 27, 2018 7:42 am

That is extremely insulting to shamans.

As a postmodern post-truth techno-shaman in a shamanic cult of a single member, I resent that!

PTP
Reply to  Komrade Kuma
June 27, 2018 7:59 am

I think I’d prefer the tea leaves, those climate models are more like a Magic 8-Ball, with only one possible response on the inside.

The models are programmed to assume CO2 climate forcing, so surprise surprise, they show CO2 climate forcing.

Alasdair
June 27, 2018 5:13 am

I treat the term scientist with great suspicion these days. This article is one of the reasons why.

Before you look at complex climate models you need to consider chaos theory and a real scientist would do that. These people obviously have not; so may best be described as pseudo scientists and treated accordingly.

A word of advice for them: Consider a pendulum and write a model that predicts the position of the weight at the bottom over time. OK that can be done. Now put a joint in the pendulum and do the same thing. OK that can be done but a bit difficult. Now put another joint in and note how difficult it has become. Lastly add in friction, vibrations, temperature, flexibility and external accelerations and see whether prediction is possible.

The upside of this is that the advice is free. The downside, of course, is that no one will pay you to do it. Hence the pseudo element needs to be invoked.

If you have just read this; have another read of the article and ponder.

Pierre
June 27, 2018 5:20 am

My translation of this study is “We shop for bias”. We use what we agree with.

Hivemind
June 27, 2018 5:26 am

The usefulness of models that haven’t been validated?

None whatsoever.

June 27, 2018 5:26 am

The cargo cultists are surveyed for opinions on whether bamboo control tower layout or runway layout is more important.

Yep, that’ll bring those cargo planes loaded with CAGW.
Any day now.

sycomputing
June 27, 2018 5:38 am

I’m not sure why we’re still asking these questions when Hansen “got it right” in the ’80’s.

https://wattsupwiththat.com/2018/06/22/thirty-years-on-how-well-do-global-warming-predictions-stand-up/#comment-2385194

Gary Pearse
June 27, 2018 6:09 am

If a model “works” for one scientific “objective” but gives a total bollocksup on another, then two things are certain. First, your coverage of factors is incomplete, the weighting of each is wrong, and you need fudged parameterizations. Second, the “objective” it works for is temporary and it is destined to bollocksup that one too. This must be perceived by serious modelers and must be a source of frustration.

Corollary: This can only be made worse by a committee.

Randy Stubbings
Reply to  Gary Pearse
June 27, 2018 7:42 am

To use an electricity analogy, it’s as if one model is tuned to get wind generation right but it gets nuclear, coal, hydro, gas, and solar generation all wrong. The next model gets hydro right, but gets the others all wrong. And so on for the other models. NONE of the models are of any use whatsoever in building or operating a real-world power system.

Wharfplank
June 27, 2018 6:38 am

“…apply their human insight and judgement…” and when they’re done for the day they drive home in their EV with the “Leave it in the Ground” bumper sticker.

Bruce Cobb
June 27, 2018 6:56 am

They already know what “the answer” is, they just need to work out how to get there.

June 27, 2018 6:57 am

“Models” for forecasting economic growth have been around since the 1960s and the results have been hopeless.
But all the macroeconomists get roughly the same results so everyone continues to play the game. Governments, big corporations, and even union leaders have their economists making forecasts on GDP.
Essentially Fourier series. Push this and you will get this much growth sort of thing.
Out of the nonsense, there is a line.
When an economist changes a GDP forecast from 3% to 3.25%, they do it to show they have a sense of humour.
Bob Hoye

Gary Pearse
June 27, 2018 6:59 am

Modellers using “physics” leave out a major damping effect on the “pure” sum of individual effects: the le Chatelier Principle (originally thought by its discoverer to apply only to chemical equilibrium), which states that additions of or changes to proportions of any agent in a system in equilibrium will cause the system to react to resist a change in the existing equilibrium condition. Newton’s 3rd law of motion, back emf in an electric motor being started up, the action of price on an increase in supply, Willis’s climate governor…..

Projected temperatures from models are 300% higher than observations proved to be. The obvious improvement is to include the Principle in the parameters. Even though ultimately terribly wrong, because it doesn’t pick up on inflections to a new regime of cooling, they would have had the ‘short’ term “climate” come out fairly reasonably – like its sister weather forecasts that are reasonably good for a week. However, because the objective is to provide rationale for overturning a free market economic paradigm and democracy rather than the best objective judgement of future climate, they would never multiply their finding by 0.333.

Ed Zuiderwijk
June 27, 2018 7:24 am

When the models are fundamentally flawed no amount of free parameter tweaking will ever produce a satisfactory fit.

Latitude
June 27, 2018 7:27 am

“One model, for example, may better simulate sea ice while another model excels in cloud simulation.”…….

Oh for crying out loud…..some got a lucky guess….they can’t tell what led up to it…or what happened after….or what effect it had

….that means they are all garbage

Nik
June 27, 2018 7:30 am

Models are very useful for bamboozling politicians and for stampeding the science-challenged public into acceptance of laws and regulations that make them less safe, less efficient, and less comfortable; raise the costs of what they do and consume; and grant funding and otherwise continue financially to support waste and bloat.

pochas94
June 27, 2018 7:56 am

In other words, Climate Scientists are doing art, not science.

Reply to  pochas94
June 27, 2018 10:46 am

pochas94

Kindergarten art as far as I can gather.

Russ Wood
June 27, 2018 7:59 am

Quick answer – Not very…

David L. Hagen
June 27, 2018 9:26 am

Wiggling an elephant’s trunk
How many climate model parameters are there?
How many of those parameters are significant???
Enrico Fermi told Freeman Dyson

“with four parameters I can fit an elephant, and with five I can make him wiggle his trunk”.

Getting the elephant’s trunk to wiggle
See The elephant’s trunk wiggle with 4 complex parameters.
https://www.youtube.com/watch?v=KfNPAXplLbc
Drawing an Elephant with Four Parameters – Univ East Anglia
http://theoval.cmp.uea.ac.uk/~gcc/projects/elephant/
Paper: Drawing an elephant with four complex parameters Mayer et al. 2009

Code To wiggle an elephant’s trunk
http://www.physics.utoronto.ca/~phy326/python/vonNeumann_elephant.py
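
In the same spirit, a minimal sketch of Fermi’s point using generic polynomial curve fitting (synthetic noise, not the Mayer et al. elephant code linked above):

import numpy as np

# With enough free parameters a model "fits" the calibration data ever more
# closely, even when the data are pure noise with nothing real to explain.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 12)
y = rng.normal(0.0, 1.0, size=x.size)            # pure noise

for n_params in (1, 2, 4, 8, 12):
    coeffs = np.polyfit(x, y, deg=n_params - 1)  # polynomial with n_params coefficients
    residual = y - np.polyval(coeffs, x)
    print(f"{n_params:2d} parameters: RMS misfit = {np.sqrt(np.mean(residual ** 2)):.3f}")
# The misfit shrinks toward zero as parameters are added, with no predictive value at all.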

Sebastian Magee
June 27, 2018 9:27 am

“One model, for example, may better simulate sea ice while another model excels in cloud simulation”
LoL. Everyone knows no model excels in cloud simulation. Even the IPCC says that most of the uncertainty is because of clouds.
I couldn’t continue reading after that.

Alan Tomalty
June 27, 2018 9:27 am

Any time you have to do tuning, that is an admission of failure, either of resolution or of the physics behind the model. In the case of climate models both will always apply. These guys just don’t get it. The models will NEVER be good enough.

Michael Jankowski
June 27, 2018 9:42 am

I’ve made the point repeatedly that the emphasis for accuracy – at least as portrayed to the public – is global surface temps (e.g., Hansen’s predictions). Regional temps and other parameters (cloud cover, precipitation, etc) can be garbage as long as that particular one is “right.”

Jim Whelan
June 27, 2018 11:51 am

So the entire “evaluation” seems to be based on opinions about what variables go into the model and not at all on comparison to the results?

I don’t know what this is but it sure ain’t science.

Walter Sobchak
June 27, 2018 12:40 pm

“In other words, there isn’t a single, completely objective definition of what makes a ‘good’ climate model,”

There you have it. They can’t even tell what it is they are trying to do, except maybe get more and bigger research grants.

June 27, 2018 1:50 pm

Climate models would probably work much better if they used a low climate sensitivity (around 0.7 C/doubling) and included solar modulation of cloud cover and the ~60 year thermohaline quasicycle.

Unfortunately if they did that they’d prove CAGW isn’t happening and they’d be defunded.

It’s a Catch 22 situation for the modellers.

Mr Bliss
June 27, 2018 3:46 pm

“Climate – It’s Complicated” – for those not having time to read the article

Dean
June 27, 2018 4:18 pm

At least they are starting to assess the models.

The fact that their predictions nearly always lie on one side of the actual observations should be a warning sign that something is seriously wrong with them.

Honestly, any engineer who constructed models which were like that, and then claimed they were useful would be professionally reprimanded.

Amber
June 27, 2018 5:10 pm

No common definition of a climate model? So how was the “science is settled” mantra arrived at?
Oh yeah … because a politician and former tobacco farmer says so.

Sheer lunacy to think climate forecasting is a cake recipe. How did this $Trillion con game even get this far? Climate changes and it’s warming. Good.

Gino
June 28, 2018 8:44 pm

If a model relies on parametric coefficients that are adjusted based on the overall model performance and not independent controlled experiments, the model ceases to be a physics model and instead is an exercise in mathematical curve fitting. Curve fitting is only valid between the actual data points the equation is calculated against and has very little predictive skill in extrapolation. See von Neumann’s statement regarding parameters.