New "simplified" Russian climate model promises faster results

From KAZAN FEDERAL UNIVERSITY and the “Russian Collusion Department”

First results were published in Geoscientific Model Development.

[Figure: Kinetic energy of winds at 5 km altitude in December–February (Aeolus calculations)]

Professor Aleksey Eliseev, Chief Research Associate at Kazan University’s Near Space Research Lab, comments,

“To find solutions for some tasks in climate research, we need calculations spanning hundreds, thousands, or even millions of years. One such task is the periodization of ice ages. Another group of tasks that requires very long calculations is climate forecasting, a type of research where we don’t have definitive information about the coefficients of the models used.

“If we use general circulation models of the atmosphere, the required calculations can take months or years even with the most advanced modern computers. To accelerate research, scientists use simplified models – the so-called climate models of intermediate complexity. In Russia, the only such model so far has been created by the Institute of Atmospheric Physics.

“Our team, comprising employees of the Potsdam Institute for Climate Impact Research, Moscow State University, Kazan Federal University, and the Institute of Atmospheric Physics, is working on one such model. We have called it the Potsdam Earth System Model.”

Currently, one of the components of POEM, called Aeolus, is ready for use. Two parts of the model, for large-scale zonal-mean winds and planetary waves, were designed by Dr. Eliseev. He has also taken part in creating the automatic tuning process for model parameters.

###

The paper:

The dynamical core of the Aeolus 1.0 statistical–dynamical atmosphere model: validation and parameter optimization

https://www.geosci-model-dev.net/11/665/2018/

Abstract:

We present and validate a set of equations for representing the atmosphere’s large-scale general circulation in an Earth system model of intermediate complexity (EMIC). These dynamical equations have been implemented in Aeolus 1.0, which is a statistical–dynamical atmosphere model (SDAM) and includes radiative transfer and cloud modules (Coumou et al., 2011; Eliseev et al., 2013). The statistical dynamical approach is computationally efficient and thus enables us to perform climate simulations at multimillennia timescales, which is a prime aim of our model development. Further, this computational efficiency enables us to scan large and high-dimensional parameter space to tune the model parameters, e.g., for sensitivity studies.

Here, we present novel equations for the large-scale zonal-mean wind as well as those for planetary waves. Together with synoptic parameterization (as presented by Coumou et al., 2011), these form the mathematical description of the dynamical core of Aeolus 1.0.

We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983–2009) forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction. We test the model’s performance in reproducing the seasonal cycle and the influence of the El Niño–Southern Oscillation (ENSO). We use a simulated annealing optimization algorithm, which approximates the global minimum of a high-dimensional function.

With non-tuned parameter values, the model performs reasonably in terms of its representation of zonal-mean circulation, planetary waves and storm tracks. The simulated annealing optimization improves in particular the model’s representation of the Northern Hemisphere jet stream and storm tracks as well as the Hadley circulation.

The regions of high azonal wind velocities (planetary waves) are accurately captured for all validation experiments. The zonal-mean zonal wind and the integrated lower troposphere mass flux show good results in particular in the Northern Hemisphere. In the Southern Hemisphere, the model tends to produce too-weak zonal-mean zonal winds and a too-narrow Hadley circulation. We discuss possible reasons for these model biases as well as planned future model improvements and applications.
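The abstract above mentions that the dynamical-core parameters are tuned with a simulated annealing algorithm, which approximates the global minimum of a high-dimensional cost function. For readers unfamiliar with the technique, here is a minimal sketch in Python of how such an optimization works in general; the cost function, parameter bounds and cooling schedule below are made-up placeholders, not anything taken from the Aeolus 1.0 code.

```python
import math
import random

def simulated_annealing(cost, bounds, steps=5000, t0=1.0, t_min=1e-3, seed=0):
    """Minimize `cost` over the box `bounds` with simulated annealing.

    cost   -- callable taking a list of parameter values and returning a float
              (e.g. a misfit between simulated and reanalysis wind fields)
    bounds -- list of (low, high) tuples, one per tunable parameter
    Illustrative sketch only; not the tuning code used for Aeolus 1.0.
    """
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]   # random starting point
    cur_c = cost(x)
    best_x, best_c = x[:], cur_c
    for k in range(steps):
        t = max(t_min, t0 * 0.999 ** k)              # exponential cooling (placeholder)
        i = rng.randrange(len(x))                    # perturb one parameter at a time
        lo, hi = bounds[i]
        cand = x[:]
        cand[i] = min(hi, max(lo, x[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
        cand_c = cost(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability,
        # which is what lets the search escape local minima early on.
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / t):
            x, cur_c = cand, cand_c
            if cur_c < best_c:
                best_x, best_c = x[:], cur_c
    return best_x, best_c

# Hypothetical usage: two dummy parameters with a quadratic "misfit" minimized at (0.3, -1.2).
if __name__ == "__main__":
    demo_cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 1.2) ** 2
    print(simulated_annealing(demo_cost, [(-2.0, 2.0), (-3.0, 3.0)]))
```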

53 Comments
Pop Piasa
March 26, 2018 7:55 am

So, if they only put in half the data they get the same results twice as fast?

John harmsworth
Reply to  Pop Piasa
March 26, 2018 8:16 am

Just a guess-4 times as fast?

Greg
Reply to  John harmsworth
March 26, 2018 1:17 pm

This is a totally unwarranted interference in western climate politics; all countries should consider expulsion of any Russian diplomats they still have left.

rocketscientist
Reply to  Pop Piasa
March 26, 2018 8:45 am

Which half of the data?

Reply to  rocketscientist
March 26, 2018 9:28 am

Which half do you want?

Ricdre
Reply to  rocketscientist
March 26, 2018 9:40 am

It doesn’t matter. Any data will do since the models are tuned to produce the expected result.

Reply to  rocketscientist
March 26, 2018 10:05 am

And why stop at half? Less is more!
I propose Zeno’s dataset.

Pop Piasa
Reply to  rocketscientist
March 26, 2018 3:03 pm

Ricdre, I wonder if you misunderstand. The data are selected carefully in order to tune the models. Just like the jolly green giant handpicks each pea in the can.

Phil R
Reply to  rocketscientist
March 26, 2018 4:45 pm

Max Photon,

And why stop at half? Less is more!

Is this the beginning of homeopathic climate modeling? 🙂

Carl Friis-Hansen
March 26, 2018 8:05 am

“He has also taken part in creating the automatic tuning process for model parameters.”
Will read the whole story later, but the above statement confuses me, to say the least. Does that mean that they add to the program a procedure to make the parameters fit the expected output? Would that be useful?

Mike McMillan
Reply to  Carl Friis-Hansen
March 26, 2018 3:07 pm

Somewhat useful, but a much bigger advance would be the automatic tuning of past data.

Kristi Silber
Reply to  Carl Friis-Hansen
March 26, 2018 6:18 pm

Carl Friis-Hansen,
This is not the first time automatic tuning has been used. An algorithm is created to tune the model. This can be useful in the sense that it is objective (theoretically, depending on the algorithm), but there is the risk of “overtuning,” which compromises the model. Tuning is NEVER used to make parameters fit “expected output” if you mean future projections. Tuning is basically to make the model stable and so that it doesn’t go off the track of reality, producing really wild, unrealistic results.
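To make the overtuning risk concrete, here is a hypothetical sketch (my own illustration, not any modeling center’s actual procedure): tune on one window of observations, then score the result on a window that was held back. A fit that is much worse outside the tuning window is the warning sign.

```python
def rmse(sim, obs):
    """Root-mean-square misfit between two equal-length series."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

def overtuning_check(run_model, params, obs, tune_window, holdout_window):
    """Compare misfit inside vs. outside the tuning window.

    run_model(params) -- hypothetical callable returning a full simulated series
    tune_window       -- e.g. slice(0, 120), the months the parameters were tuned to
    holdout_window    -- e.g. slice(120, 240), months held back for validation
    """
    sim = run_model(params)
    fit_tune = rmse(sim[tune_window], obs[tune_window])
    fit_hold = rmse(sim[holdout_window], obs[holdout_window])
    # A ratio well above 1 suggests the parameters were fitted to noise
    # in the tuning window rather than to real climate behaviour.
    return fit_tune, fit_hold, (fit_hold / fit_tune if fit_tune else float("inf"))
```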
This is a very good overview of tuning. Note that it is an erroneous assumption that models are all developed to fit 20th C conditions – “The question of whether the twentieth-century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35% of modelers stating that twentieth-century warming was rated very important to decisive, whereas 30% would not consider it at all during development. Some view the temperature record as an independent evaluation dataset not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming. The question of developing toward the twentieth-century warming therefore is an area of vigorous debate within the community.” This also makes suspect the claim that the whole field is subject to “groupthink.”
https://journals.ametsoc.org/doi/full/10.1175/BAMS-D-15-00135.1
“Tuning can be described as an optimization step and follows a scientific approach. Tuning can provide important insights on climate mechanisms and model uncertainties. Some biases in climate models can be reduced or removed by tuning, while others remain stubbornly resistant. It is important to understand why if we want to improve models.”
This describes the process of real-life tuning of 6 models (I haven’t read the whole paper):
https://www.geosci-model-dev.net/10/3207/2017/gmd-10-3207-2017.pdf

Reply to  Kristi Silber
March 26, 2018 6:51 pm

Kristi, you are missing the most prevalent type of tuning, which I call “evolutionary tuning”.
There are many statements by modelers that say that various parameters of the models are tuned. Here’s Gavin Schmidt discussing the GISS Model E GCM, as an example:

The model is tuned (using the threshold relative humidity U00 for the initiation of ice and water clouds) to be in global radiative balance (i.e., net radiation at TOA within ±0.5 W m−2 of zero) and a reasonable planetary albedo (between 29% and 31%) for the control run simulations.

Note that this is not tuning some trivial part of the model. Without this tuning the modeled system is not energetically balanced, either losing or gaining more energy than is impinging on the system. Without this tuning the model would spiral into either heat death or snowball … which should be enough to toss the model out right there, because the real world is nowhere near that sensitive.
As to being tuned to the historical data, all the models are tuned to it, but in an “evolutionary” rather than a direct fashion. By that I mean a model is built, and the only way we have to test it is to compare it to historical data. If a given change in the model makes it fit that historical data better, the change is retained in future incarnations of the model, while changes that make the historical fit worse are discarded. After many, many iterations, eventually we end up with a model that reproduces historical data in some fashion.
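To put that evolutionary tuning in concrete terms, here is a minimal sketch of the loop I’m describing (my own illustration, not any modeling group’s actual workflow); the model, the change generator and the “historical” series below are toy placeholders.

```python
import random

def rmse(sim, obs):
    """Root-mean-square misfit between simulated and observed series."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

def evolutionary_tuning(run_model, propose_change, params, hist_obs, generations=200, seed=1):
    """Keep a candidate change only if it improves the fit to the historical record.

    run_model(params)           -- hypothetical: returns a simulated historical series
    propose_change(params, rng) -- hypothetical: returns a tweaked copy of the parameters
    hist_obs                    -- the historical observations being matched
    """
    rng = random.Random(seed)
    best = dict(params)
    best_fit = rmse(run_model(best), hist_obs)
    for _ in range(generations):
        cand = propose_change(dict(best), rng)
        fit = rmse(run_model(cand), hist_obs)
        if fit < best_fit:        # the change is retained in future incarnations
            best, best_fit = cand, fit
        # otherwise the change is discarded
    return best, best_fit

# Toy demo: the "model" is just a linear trend with two tunable parameters.
if __name__ == "__main__":
    obs = [0.1 * t + 0.5 for t in range(50)]                       # stand-in "historical data"
    model = lambda p: [p["slope"] * t + p["offset"] for t in range(50)]
    tweak = lambda p, rng: {k: v + rng.gauss(0.0, 0.02) for k, v in p.items()}
    print(evolutionary_tuning(model, tweak, {"slope": 0.0, "offset": 0.0}, obs))
```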
This evolutionary tuning has several effects. First, it means that the fit to historical data is meaningless, because that’s what the models are tuned to fit.
Second, it means that the IPCC argument that “the models can’t replicate the historical data without the anthropogenic forcings” is also meaningless. If you tune a model with a group of forcings, removing any one or more of them will result in a worse fit, duh.
Third, it means that the aspects of the climate for which the model is not trained will often disagree greatly when compared with historical data. See precipitation as an example.
Fourth, and crucially, this evolutionary tuning means that regardless of the forcings used as model input, a model can likely be found that reproduces the historical data. Or as Kiehl put it in his groundbreaking paper,

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 °C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity. The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.

And a good question it is. The answer is that the so-called “climate sensitivity” diagnosed by the climate models is merely a function of the size of the forcings … and that, of course, puts the lie to the idea that the models are “physics based”. Well, I guess they are “physics based”, but only in the same sense that a Hollywood blockbuster is “based on a true story”.

Kristi Silber
Reply to  Kristi Silber
March 27, 2018 1:40 pm

Willis, while I may not have discussed tuning in detail, that doesn’t mean I’m not aware that it is both powerful and open to abuse.
I’m not an expert, I readily admit. I may have things wrong, but it seems to me not everyone tunes their model as you suggest. What do you mean exactly by “historical data” – the whole industrial age?
Willis: “As to being tuned to the historical data, all the models are tuned to it, but in an “evolutionary” rather than a direct fashion. By that I mean a model is built, and the only way we have to test it is to compare it to historical data. If a given change in the model makes it fit that historical data better, the change is retained in future incarnations of the model, while changes that make the historical fit worse are discarded. After many, many iterations, eventually we end up with a model that reproduces historical data in some fashion.”
My impression is that this isn’t always the case, or not to the whole historical record, anyway.
“Notably, there is not any obvious consensus in the modeling community as to the extent to which parameter choices should be guided by conforming to process-level knowledge as opposed to optimizing emergent behaviors in climate models. At many centers, the philosophy for the most part has been to tune parameters in ways that make physical sense, with the expectation that in the long run that should be the best strategy. Increasing skill in climate models over time does support this approach (Reichler and Kim, 2008).”
https://journals.ametsoc.org/doi/full/10.1175/BAMS-D-15-00135.1#
Just because a model is tuned to a period in the past, it isn’t necessarily tested (validated) against that same stretch of the past. Some models are tuned to or validated against prehistoric or pre-industrial data. One of the models in the Schmidt paper talks about using only a 10-year stretch for tuning. What if you used 2007–2017 for that, then tested the model against the whole 20th C – would that be valid, do you think?
Willis: “Second, it means that the IPCC argument that “the models can’t replicate the historical data without the anthropogenic forcings” is also meaningless. If you tune a model with a group of forcings, removing any one or more of them will result in a worse fit, duh.”
It’s not meaningless. If there’s no way to reproduce the historical past without anthropogenic forcing, that means that whatever tuning you use, whatever other parameters you play with, the model won’t do it. That shows that the recent past is an extraordinary event, outside natural climate variation. Do you know for a fact that these researchers are mindlessly employing obvious circular reasoning and methods? I haven’t seen that evidence.
“The CM3 coupled model was initialized from present-day ocean conditions and allowed to adjust to a preindustrial, quasi-steady state with a small TOA energy imbalance (0.2–0.3 W m−2). Tuning in CM3 was concentrated in the atmospheric component AM3. Outside of the atmospheric component, sea ice, land, and vegetation albedo, along with snow masking, were tuned. These tunings improved the Atlantic Meridional Overturning Circulation in preliminary coupled configurations, prior to final tuning of the atmospheric component and subsequent initiation of the preindustrial coupled control.”
https://www.geosci-model-dev.net/10/3207/2017/gmd-10-3207-2017.pdf
Sounds to me as if this was tuned to pre-industrial times, no?
Willis: ” The answer is that the so-called “climate sensitivity” diagnosed by the climate models is merely a function of the size of the forcings … and that, of course, puts the lie to the idea that the models are “physics based”.
I don’t understand this, I’m afraid. Why would that put a lie to anything? The models can be physics-based and still be imprecise and uncertain and variable. The models will never be totally accurate representations of the future, that’s expecting too much. But the combination of models tells us more than the sum of the parts. I think that’s something that’s not appreciated enough; I only realized it recently. Models are diverse, with their own foibles, and many of them are known by the modelers themselves. They also have their strengths. The IPCC has begun to take this into account, as I understand it. Then there are the means and ranges from different models that overlap. (I wish I’d read the relevant section of the IPCC, but don’t have time now. There’s a chart posted on WUWT recently, but don’t have time to find that now, either.)
“It is important to note that the change in radiative flux is calculated and not prescribed in the fully coupled climate models.” (Kiehl, 2007)
If you tuned a model to the 20th C and it replicated conditions, then ran it using experimental data in which CO2 didn’t rise, and in that run lost the increase in temperature, etc., wouldn’t that show the same thing as doing the opposite? It is still changing only one variable. Only one state can ever have a reality to compare it with, and it would at least be one way of illustrating that CO2 affects temperature – if one can accept a model as any evidence at all.
No one says models are perfect. But there is absolutely no way to try to predict anything without using a model of some sort, and there is no way to run the type of experiment people think of when they think of “science” (double blind, replicated, controlled) because we only have one Earth. More realism leads to more complexity and more sources of uncertainty. That’s just the way it is. The models are improving, as is the transparency of how it’s done. The scientific community is very aware of the issues. Sometimes it takes a while, but there is change. Models are imperfect tools, but that doesn’t make them useless.

Kristi Silber
Reply to  Kristi Silber
March 27, 2018 1:47 pm

Willis: “Without this tuning the modeled system is not energetically balanced, either losing or gaining more energy than is impinging on the system. Without this tuning the model would spiral into either heat death or snowball … which should be enough to toss the model out right there, because the real world is nowhere near that sensitive.”
I think whether the “real world is that sensitive” depends on the parameter being adjusted. Energetic balance is extremely important for the real system, too. Why should that mean tossing out a model?

Kristi Silber
Reply to  Kristi Silber
March 27, 2018 10:55 pm

Rob Dawg
March 26, 2018 8:05 am

Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong.
~ Mencken

Allen Duffy
Reply to  Rob Dawg
March 26, 2018 8:18 am

And the answer is …… 42 (h/t Douglas Adams)

rbabcock
March 26, 2018 8:07 am

I love the Russians. Centuries of brutal cold weather have left them with a simple, stoic attitude on just about everything.
With a history of great physicists and a legacy of not a lot of resources, they no doubt took a hard look at the real data and formulated something made out of common sense. Whether it is right or not remains to be seen, but my guess is it has at least as good a chance as what we are seeing now coming out of the US and Europe.
The issue with complex models is, if something is wrong at the beginning, it just gets amplified downstream. And with the advent of supercomputers, it just amplifies the errors faster.

John harmsworth
March 26, 2018 8:20 am

Simple works for me. Since climate is non-linear and chaotic, adding more estimated and/or poorly understood inputs just accelerates and magnifies the divergence from reality.
This is a “feature” of the Warmist devotion to models. Much like the idiotic peculiarities and difficulties of working with early editions of Microsoft were “features”.
They serve the purposes of the creators, not of the clientele.

michael hart
Reply to  John harmsworth
March 26, 2018 8:29 am

Yes. There are other scientific endeavors which are known, or at least suspected, to be computationally intractable. But the practitioners still forever press for more money and computing power to, quite truthfully, make it better than it currently is, while still ignoring such basic criticisms. Ultimately the real improvements, if any, will have to come from better understanding, not a bigger ‘box’ which just allows them to strut and preen in front of their peers.

JohnWho
March 26, 2018 8:29 am

What’s wrong with getting the wrong information faster?

Tom in Florida
Reply to  JohnWho
March 26, 2018 8:57 am

Or is it better to get the fast information wronger.

Joe Bastardi
March 26, 2018 8:32 am

Interesting, yet another idol, oops model, being offered up for worship

Pop Piasa
Reply to  Joe Bastardi
March 26, 2018 3:23 pm

Surely among the Russians is a man like yourself who values past weather clips as guides to upcoming scenes in the ongoing weather/climate movie, Joe. Incorporating analog years might simplify their models, or is that already done?

markl
March 26, 2018 8:37 am

And the results are?

ResourceGuy
March 26, 2018 8:48 am

Does it also produce nerve gas as a byproduct?

Dave
March 26, 2018 8:57 am

It’s worse than we thought.

March 26, 2018 9:02 am

From the article, “He has also taken part in creating the automatic tuning process for model parameters.”
Another engineering model, with no capacity to reliably predict past the calibration bounds. None of these people have any concept of science or physical meaning.

Carl Friis-Hansen
Reply to  Pat Frank
March 26, 2018 9:07 am

That was what I thought!

ResourceGuy
Reply to  Pat Frank
March 26, 2018 9:22 am

+10

Alasdair
March 26, 2018 9:32 am

CC – Calculating Chaos. A fool’s errand? But hang on – Very lucrative.

Editor
March 26, 2018 10:09 am

The real beauty of computers is that although humans are limited by human brain processing ability, computers allow us to make human mistakes at a truly incredible rate of speed …
w.

David E. Hein
Reply to  Willis Eschenbach
March 26, 2018 10:20 am

about 30 mins faster than the paramedics.

Walter Sobchak
Reply to  Willis Eschenbach
March 26, 2018 3:33 pm

To err is human. To really foul things up requires the assistance of computers.

Richmond
Reply to  Willis Eschenbach
March 26, 2018 4:45 pm

A true believer in Catastrophic Anthropogenic Global Warming tried to convince me that the fastest computers said it was true. I replied that garbage in equals garbage out, and that while garbage at the speed of light might be relativistic garbage, it is still garbage. Human mistakes at the speed of light are impressive, but still wrong. Your observation about the beauty of computers is spot on.

March 26, 2018 10:59 am

When earlier climate models are already simplified, simplify more …

MarkW
March 26, 2018 11:17 am

We can now get the wrong answer in half the time.

Reply to  MarkW
March 26, 2018 12:30 pm

Pretty much the bottom line. At least it’s mostly wasted Russian money.

MarkW
Reply to  ristvan
March 26, 2018 12:36 pm

Since they are only using half as much supercomputer time, they are wasting only half as much money for each run.

MarkW
Reply to  ristvan
March 26, 2018 12:36 pm

They should be able to earn back the money spent on the models in nothing flat.

Chimp
Reply to  ristvan
March 26, 2018 12:37 pm

They can use the computer time saved in order to mine Bitcoins.

Walter Sobchak
Reply to  ristvan
March 26, 2018 3:34 pm

Chimp: Rimshot.

Tom O
March 26, 2018 11:37 am

It is my expectation that all the knowledgeable put-downs come from people who bothered to read the paper, not just the article, correct? No? Kind of what I thought. Knock it, laugh at it, call it stupid, but for God’s sake, don’t take the time to read it.
Models of chaotic systems, at best, are simulations, not models, and this approach of trying to make a simpler model that approximates reality makes far more sense than creating the “big box” monsters that can’t get close because there are too many assumptions instead of understandings involved in their creation. A simpler model, when it gets close, can have greater complexity added to see what happens.
But hey, put them down as just another greedy bunch instead of potentially actually trying to do something in earnest, especially based on a half dozen paragraphs of introduction and a page of abstract. You have no idea what they were attempting to do other than their clear statement that the model output didn’t match with the southern hemisphere wind patterns, which says they are looking for something that matches current patterns first.
Nope. Shoot them down and then try to one up each other. I never saw the obligatory “global warming or climate change” comment made in the article, so why do you put them in the same category as Mann and company?

Alan Tomalty
Reply to  Tom O
March 26, 2018 12:18 pm

“We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983–2009) forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction.”
I hope you realize that the ERA model referred to above is a specialized GCM that takes satellite measurements, calculates such things as soil moisture, precipitation, air temperatures, surface temperatures, etc., and then runs these calculated data sets through its own internal GCM to arrive at another data set. That is why they call it reanalysis. To quote from the ERA website, even they admit to three large errors: 1) tropical moisture larger than observed from 1991 onwards; 2) precipitation greatly exceeding evaporation; 3) spurious Arctic temperature trends. So any GCM that uses ERA datasets is using a worthless data set. Why does every GCM use it then? Because no one wants to spend the money collecting soil samples from all over the world. What a mess! So what these guys are doing is throwing out all the complexity of GCMs and replacing it with a simple model that uses the same bad data. Pat Frank has a simple model that does the same thing. He will even give it to any climate scientist who wants to use it. It produces just as good results as the ones that run on any supercomputer. However, without good data you are whistling in the wind. And even if you had perfect data sets of millions of soil samples, precipitation rates, etc., YOU WOULD STILL HAVE THE CLOUD ERROR that no one can model properly and that never will be modeled properly. What has science come to?

Graham
March 26, 2018 11:58 am

So it’s fraud-while-you-wait now.

Luther Bl't
March 26, 2018 12:58 pm

And now, time for my visit to No Tricks Zone.

March 26, 2018 1:38 pm

Isn’t the Potsdam group among Europe’s most radical alarmists?

Pop Piasa
March 26, 2018 4:17 pm

Maybe the Potsdam group is suffering from pot’s damn paranoia.

Jon
March 26, 2018 7:05 pm

Greg March 26, 2018 at 1:17 pm
This is a totally unwarranted interference in western climate politics; all countries should consider expulsion of any Russian diplomats they still have left.
Right on Greg. Also, those pesky Russians must have originally befuddled the IPCC by hacking the results to create this CO2 crap. No way would our Glorious Masters lie to us!

March 27, 2018 7:15 am

We don’t want to be found colluding w/any swanky Russian models….

Toto
March 28, 2018 10:32 pm

Building models is confused with science in this age of computers, especially if they leave out the testing part. That’s a general comment; I have no comment on this particular model.
Here is a black swan avalanche. It might wipe out your mental model of an avalanche. It’s Russian. It’s rather pushy. As far as I know, Putin had nothing to do with it.
https://www.facebook.com/MeteoReporterStorm/videos/2000919363561372/