by Judith Curry
An insightful interview with Bjorn Stevens.
Frank Bosse provided this Google translation of an interview published in Der Spiegel, print issue 13/2019, pp. 99-101, March 22, 2019.
Excerpts provided below, with some minor editing of the translation.
begin quote:
Global warming forecasts are still surprisingly inaccurate. Supercomputers and artificial intelligence should help. By Johann Grolle
It’s a simple number, but it will determine the fate of this planet. It’s easy to describe, but tricky to calculate. Researchers call it “climate sensitivity”.
It indicates how much the average temperature on Earth rises when the concentration of greenhouse gases in the atmosphere doubles. Back in the 1970s, it was estimated using primitive computer models. The researchers concluded that its value likely lies somewhere between 1.5 and 4.5 degrees.
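To make the definition concrete, the relationship can be sketched in a few lines: the radiative forcing from CO2 grows with the logarithm of concentration, so the warming scales with the number of doublings. This is an illustrative sketch only; the function name and the 280 ppm pre-industrial baseline are assumptions of the sketch, not values from the interview.

```python
import math

def warming_from_co2(sensitivity_k, c_ppm, c0_ppm=280.0):
    """Equilibrium warming for a CO2 change, given the climate
    sensitivity (warming per doubling). Forcing is logarithmic in
    concentration, so warming scales with log2(C / C0)."""
    return sensitivity_k * math.log2(c_ppm / c0_ppm)

# The 1.5-4.5 K range spans a threefold spread in outcomes even for
# the exact same concentration path (here: one full doubling):
low = warming_from_co2(1.5, 560.0)
high = warming_from_co2(4.5, 560.0)
print(low, high)
```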
That result has not changed to this day, about 40 years later. And that is exactly the problem.
The computational power of computers has risen a millionfold, but the prediction of global warming is as imprecise as ever. “It is deeply frustrating,” says Bjorn Stevens of the Max Planck Institute for Meteorology in Hamburg.
He has been working in climate modeling for more than 20 years. This failure is not easy to convey to the public. Stevens wants to be honest; he does not want to cover up any problems. Nevertheless, he does not want people to think that the last decades of climate research have been in vain.
“The accuracy of the predictions has not improved, but our confidence in them has grown,” he says. The researchers have examined everything that might counteract global warming. “Now we are certain: it is coming.”
As a decision-making aid for the construction of dykes and drainage channels, the climate models are unsuitable. “Our computers cannot even predict with certainty whether the glaciers in the Alps will grow or shrink,” explains Stevens.
The difficulties he and his fellow researchers face can be summed up in one word: clouds. The mountains of water vapor slowly moving across the sky are the bane of all climate researchers.
First of all, it is the enormous diversity of their forms that makes clouds so unpredictable. Each type of cloud has a different effect on the climate. And above all: the effect is strong.
Simulating natural processes on a computer is always especially delicate when small causes produce large effects, and for no factor in the climate system is this as true as for clouds. If the fractional coverage of low-level clouds fell by only four percentage points, it would suddenly be two degrees warmer worldwide. The entire temperature rise that was deemed just tolerable in the Paris Agreement thus corresponds to four percentage points of cloud cover – no wonder that reliable predictions are hard to make.
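The arithmetic behind the "four percentage points ~ two degrees" claim can be checked on the back of an envelope. The numbers below are assumed round figures, not values from the interview: a shortwave cooling of roughly 0.6 W/m² per percentage point of low-cloud cover, and a sensitivity parameter of roughly 0.8 K of warming per W/m² of forcing.

```python
# Back-of-envelope check with assumed round numbers (not from the article).
sw_cooling_per_pp = 0.6   # W/m2 of reflected sunlight per percentage point
                          # of low-cloud cover (assumed)
lambda_k_per_wm2 = 0.8    # equilibrium warming per W/m2 of forcing (assumed)

delta_cover_pp = 4.0      # four percentage points less low cloud
forcing = delta_cover_pp * sw_cooling_per_pp   # 2.4 W/m2 of extra heating
warming = forcing * lambda_k_per_wm2           # roughly 2 K, as claimed
print(forcing, warming)
```

Under these assumptions the claim is at least internally plausible: a small shift in cloud cover produces a forcing comparable to the entire Paris target.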
In addition, the formation of clouds depends heavily on local conditions. But even the most modern climate models, which do map the entire planet, are still blind to such small-scale processes.
Scientists’ model calculations have become more and more complex over the past 50 years, but the principle has remained the same. Researchers program the Earth as faithfully as possible into their computers and specify how strongly the sun shines on each region of the world. Then they watch how the temperature on their model Earth settles.
The large-scale climatic events are well represented by climate models.
However, problems are caused by the small-scale details: the air turbulence above the sea surface, for example, or the wake vortices that mountains leave in passing fronts. Above all, the clouds: in their models, the researchers cannot let water evaporate, rise, and condense as it does in reality. They have to make do with more or less plausible rules of thumb.
“Parameterization” is the name of the procedure, but the researchers know that in truth it is the name of a chronic disease afflicting all of their climate models. Often, different parameterizations deliver drastically divergent results. Arctic temperatures, for example, sometimes differ by more than ten degrees between the various models. This makes any forecast of ice cover seem like mere reading of tea leaves.
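For readers unfamiliar with the term: a parameterization is exactly such a rule of thumb, a closed formula standing in for physics the grid cannot resolve. A minimal sketch of one well-known example, a Sundqvist-style diagnostic cloud-fraction scheme (the coefficient values here are illustrative, not those of any particular model):

```python
import math

def cloud_fraction(rh, rh_crit=0.8):
    """Sundqvist-style diagnostic cloud fraction: a grid cell starts to
    fill with cloud once relative humidity exceeds a tuned critical
    value rh_crit, reaching full cover at saturation."""
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - math.sqrt((1.0 - rh) / (1.0 - rh_crit))

# The same humidity yields different cloud cover under different tunings,
# which is one way divergent parameterizations produce divergent climates:
print(cloud_fraction(0.9, rh_crit=0.8), cloud_fraction(0.9, rh_crit=0.6))
```

The tunable `rh_crit` is the kind of "human input parameter" the article is complaining about: it is not derived from first principles, and every modeling group chooses it differently.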
“We need a new strategy,” says Stevens. He sees it as his duty to provide better decision support to a society threatened by climate change. “We need new ideas,” says Tapio Schneider of Caltech in Pasadena, California.
The Hamburg Max Planck researcher has therefore turned to another type of cloud: the cumulonimbus. These are mighty thunderclouds which at times rise, dark and threatening, higher than any mountain range, up to the edge of the stratosphere.
This type of cloud, Stevens explains, has a comparatively small influence on the average temperature of the earth, because it reflects about as much solar radiation back into space as it traps of the heat radiated by the earth. But cumulonimbus clouds are nevertheless an important climatic factor, because they transport energy. If their number or their distribution changes, this can help shift large weather systems or entire climate zones.
Above all, one feature makes the spectacularly powerful cumulonimbus clouds interesting to Stevens: they are dominated by strong convection currents that swirl on scales large enough to be computable by modern supercomputers. The researcher places high hopes in a new generation of climate models that is currently being launched.
While most of their predecessors laid a grid with a resolution of about one hundred kilometers over the globe for their calculations, these new models have reduced the mesh size to five kilometers or even less. To test their reliability, Stevens, together with colleagues in Japan and the US, carried out a first comparison simulation.
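The jump from 100 km to 5 km is easy to quantify: the number of horizontal grid columns grows with the inverse square of the mesh size. A quick sketch, using the round figure of 510 million km² for the Earth's surface:

```python
EARTH_SURFACE_KM2 = 510e6  # approximate surface area of the Earth

def horizontal_cells(mesh_km):
    """Number of surface grid columns at a given mesh size."""
    return EARTH_SURFACE_KM2 / mesh_km**2

coarse = horizontal_cells(100.0)  # ~51,000 columns
fine = horizontal_cells(5.0)      # ~20,400,000 columns
# Refining 100 km -> 5 km multiplies the column count by (100/5)^2 = 400;
# the shorter stable time step at the finer mesh adds roughly another
# factor of 20, which is why such runs need the newest supercomputers.
print(int(coarse), int(fine))
```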
It turned out that these models represent tropical storm systems quite well. It therefore seems that this critical part of climate events will become more predictable in the future. However, the simulated period was initially only 40 days. Stevens knows that to portray climate change he will have to run the models for 40 years. It is still a long way until then.
Stevens, meanwhile, fears that it is precisely the cumulonimbus clouds that could spring surprises. Tropical storm systems are notorious for their unpredictability. “The monsoon, for example, could be prone to sudden changes,” he says.
It is possible that the calculations of the fine-mesh computer models will allow such climate surprises to be predicted early. “But it is also conceivable that there are fundamentally unpredictable climate phenomena,” says Stevens. “Then we can simulate as exactly as we like and still arrive at no reliable predictions.”
That is the worst of all possibilities, because then mankind would continue to steer into the unknown.
end quote.
It is only predictions of weather and climate in the future that are difficult.
Stop trying to do that, and the problem evaporates.
It is the future, and has not happened yet!
Oh no! Please do some decent translation!
“The computational power of computers has risen many millions of dollars,”
“Now we are sure: she is coming.”
No doubt some good science here, but this made me stop reading:
“It’s a simple number, but it will determine the fate of this planet. ”
So, the climate sensitivity will decide the “fate” of the planet? What does that mean? Will it self-destruct at +4.5?
Climate predictions are not difficult; they are impossible, and all climate models are worthless. Climate is not global, it is regional, and vast regions of the world have no data at all. Keep in mind the issue is not global temperature. The issue is global mean science. Can we pick one thing from the planet to represent everything else of that type? For example, is the rock in my front yard I picked up the global mean rock that defines the geology of the world? Are my opinions global mean opinions, so if you want to know anything, just ask me? Can we put all the science of the world, one global mean thing for everything, in a room and just study that? Do we have a global mean mammal? I think my dog fits that, and for a small fee you can pet him and make observations as he represents all mammals of the world.

So why don’t we have this type of global mean science? Would it not make our science more efficient? Well, the problem with all global mean things is that we end up making a construct that is not real, or that is so generalized it cannot answer any interesting questions. So until such time as the UN recognizes my opinions as global mean opinions, with the opinions of all other humans on earth recognized as noise, I am not into global mean climate things like temperature.
“Global warming forecasts are still surprisingly inaccurate. Supercomputers and artificial intelligence should help. By Johann Grolle”
Read: a global warming forecast that ever ended up being competent would be so amazing and so startling that we should declare a national holiday to recognize it when it occurs. Global mean things are not real things, and as such we cannot forecast them. We cannot even forecast real things.
“…Nevertheless, he does not want people to think that the latest decades of climate research have been in vain…”
He can WANT that but seems pretty sullen about the reality of it.
I think part of the problem is that they are trying to model and make predictions (or projections) for the entire globe, which consists of all possible climates and all possible variables. Why not start by trying to model the climate of a small area with fewer variables, like the city of Los Angeles, with its limited historic temperature range and limited weather possibilities? If you can successfully model the future of a small area, then you can model more areas, and maybe, eventually, model the whole planet. I don’t think one mathematical formula will work for every place on Earth. Right now, nobody can tell me if Los Angeles is going to get wetter/drier/windier/calmer/colder/hotter, or anything else that would actually be useful to know. And after all, if you can’t predict little things in the near future, I certainly don’t think you can predict huge things in the far distant future. Climate is regional, not global.
It should be apparent to anyone paying attention that the existing GCMs (except possibly the Russian one) are hopelessly faulty, as demonstrated by their failure to predict average global temperature.
Some things I know:
(1) Assuming that CO2 is the ‘control knob’ on climate is wrong.
(2) Multiple compelling lines of evidence, from paleo times to the present, have demonstrated that CO2, in spite of being a ghg, has little if any effect on climate.
(3) There are many factors contributing to climate but nearly all of them are insignificant and can be ignored.
(4) Considering only three factors (an approximation of the net effect of ocean cycles, the time integral of SSN anomalies, and TPW, the absolute water vapor content of the atmosphere) has resulted in a 98+% match to measured average global temperature from 1895 through 2018.
(5) Natural turbulence/roiling of the ocean and atmosphere produces random fluctuation of reported temperatures and this needs to be smoothed out.
(6) The ‘notches’ in TOA and intermediate graphs of radiation flux vs wavenumber demonstrate that much (about 40%) of the radiation energy absorbed by ghg other than water vapor is made available to WV molecules by thermalization and emitted to space by WV.
Some things I suspect:
(1) Water vapor in the atmosphere is calculated somehow using the Clausius-Clapeyron equation. (CC only applies at saturation.)
(2) Indirect solar influence on average global temperature as quantified by SSN is ignored.
(3) Cloud effects are simulated using human input parameters.
(4) The models assume that CO2 drives temperature.
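On point (1) above: the Clausius-Clapeyron relation indeed gives the saturation vapor pressure, i.e. an upper bound on vapor content; the actual amount also requires a relative humidity. For reference, a common closed-form approximation to it is the August-Roche-Magnus formula (standard published coefficients shown; this is a reference sketch, not what any particular model does):

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Saturation vapor pressure over water (hPa) from the
    August-Roche-Magnus approximation to Clausius-Clapeyron.
    Reasonable over roughly -40 to +50 C."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

# The relation sets only the ceiling on water vapor: near 15 C the
# saturation pressure rises about 6-7 percent per kelvin of warming.
e15 = saturation_vapor_pressure_hpa(15.0)
e16 = saturation_vapor_pressure_hpa(16.0)
print(round(100.0 * (e16 / e15 - 1.0), 1))
```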
What do the Russians do differently which results in a good match with measured?
http://icecap.us/images/uploads/Screen_Shot_2019-03-01_at_7.36.05_AM.png
I am not sure why people believe that computer models know something that humans don’t. The model is a mathematical representation of human understanding. It doesn’t know anything.
Computer models are useful tools for many reasons, including the testing of a hypothesis and identifying our ignorance quicker, but that only works if the humans are willing to admit that they are ignorant.
Indeed, the climate models have been very good at showing us that the hypothesis of a CO2 driven climate is wrong, but they are not coming up with a better hypothesis on their own. The model only does what it is told to do. It knows nothing!
AI: Since AI systems are starting to “think” for themselves, one has to wonder what they would say about CO2-induced global warming.
“It is very hard to predict, especially the future.” – Niels Bohr
“The future, like everything else, is no longer quite what it used to be.” – Paul Valéry
They really aren’t.
Almanacs used to be pretty damned reliable.
“If the fractional coverage of low-level clouds fell by only four percentage points, it would suddenly be two degrees warmer worldwide.”
Nonsense.
___________________________________________________
After “more than 20 years [ you have ] been researching in the field of climate modeling,”
can you report a day / hour / minute / second when “suddenly [ it was ] two degrees warmer WORLDWIDE”?
And report the catastrophic event that heated the atmosphere 2 degrees warmer WORLDWIDE without losing energy at TOA.
During that ongoing forcing of energy into the atmosphere WORLDWIDE.