From the “we’re gonna need a bigger computer” department.
Climate model uncertainties ripe to be squeezed
The latest climate models and observations offer unprecedented opportunities to reduce the remaining uncertainties in future climate change, according to a new study.
Although the human impact of recent climate change is now clear, future climate change depends on how much additional greenhouse gas is emitted by humanity and also how sensitive the Earth System is to those emissions.
Reducing uncertainty in the sensitivity of the climate to carbon dioxide emissions is necessary to work out how much needs to be done to reduce the risk of dangerous climate change, and to meet international climate targets.
The study, which emerged from an intense workshop at the Aspen Global Change Institute in August 2017, explains how new evaluation tools will enable a more complete comparison of models to ground-based and satellite measurements.
Produced by a team of 29 international authors, the study is published in Nature Climate Change.
Lead author Veronika Eyring, of DLR in Germany, said:
“We decided to convene a workshop at the AGCI to discuss how we can make the most of these new opportunities to take climate model evaluation to the next level”.
The agenda laid out includes plans to make the increasing number of global climate models being developed worldwide more than the sum of their parts.
One promising approach involves using all the models together to find relationships between the climate variations being observed now and future climate change.
“When considered together, the latest models and observations can significantly reduce uncertainties in key aspects of future climate change”, said workshop co-organiser Professor Peter Cox of the University of Exeter in the UK.
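The emergent-constraint idea described above — regress each model's projected change against a quantity we can measure today, then use the real-world measurement to narrow the projection — can be sketched with a synthetic ensemble. This is a minimal illustration, not the paper's method; all numbers (ensemble size, slope, the "observed" value) are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ensemble": each model pairs an observable quantity
# (e.g. present-day variability) with its projected future warming.
n_models = 20
observable = rng.uniform(0.5, 2.0, n_models)
projection = 1.0 + 1.5 * observable + rng.normal(0, 0.1, n_models)

# Step 1: find the emergent relationship across the models.
slope, intercept = np.polyfit(observable, projection, 1)

# Step 2: apply a real-world observation to constrain the projection.
obs_value = 1.2                                   # hypothetical measurement
constrained = intercept + slope * obs_value

# The raw ensemble spread is much wider than the constrained estimate.
raw_spread = projection.std()
```

The payoff is that the spread of the constrained estimate is set by the observation's uncertainty and the scatter about the regression line, not by the full inter-model spread.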
The new paper is motivated by a need to rapidly increase the speed of progress in dealing with climate change. It is now clear that humanity needs to reduce emissions of carbon dioxide very rapidly to avoid crashing through the global warming limits of 1.5°C and 2°C set out in the Paris Agreement.
However, adapting to the climate changes that we will experience requires much more detailed information at the regional scale.
“The pieces are now in place for us to make progress on that challenging scientific problem”, explained Veronika Eyring.
From the University of Exeter via press release
The paper: https://www.nature.com/articles/s41558-018-0355-y
Taking climate model evaluation to the next level
Abstract
Earth system models are complex and represent a large number of processes, resulting in a persistent spread across climate projections for a given future scenario. Owing to different model performances against observations and the lack of independence among models, there is now evidence that giving equal weight to each available model projection is suboptimal. This Perspective discusses newly developed tools that facilitate a more rapid and comprehensive evaluation of model simulations with observations, process-based emergent constraints that are a promising way to focus evaluation on the observations most relevant to climate projections, and advanced methods for model weighting. These approaches are needed to distil the most credible information on regional climate changes, impacts, and risks for stakeholders and policy-makers.

Fig. 1: Annual mean SST error from the CMIP5 multi-model ensemble.
Fig. 2: Schematic diagram of the workflow for CMIP Evaluation Tools running alongside the ESGF.
Fig. 3: Examples of newly developed physical and biogeochemical emergent constraints since the AR5.
Fig. 4: Model skill and independence weights for CMIP5 models evaluated over the contiguous United States/Canada domain.
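The "skill and independence weights" shown in the paper's Fig. 4 follow the general idea of published combined-weighting schemes (e.g. Knutti et al. 2017): up-weight models that match observations, down-weight near-duplicate models. A minimal sketch of that idea, with invented error numbers and arbitrary shape parameters:

```python
import numpy as np

# Hypothetical per-model RMSE against observations (skill) and
# pairwise inter-model distances (independence); all numbers invented.
rmse = np.array([0.8, 1.0, 1.5, 0.9])             # lower = more skilful
dist = np.array([[0.0, 0.2, 1.0, 0.9],
                 [0.2, 0.0, 1.1, 1.0],
                 [1.0, 1.1, 0.0, 0.8],
                 [0.9, 1.0, 0.8, 0.0]])           # small = near-duplicates

sigma_q, sigma_s = 1.0, 0.5                       # arbitrary shape parameters

# Skill term: models close to the observations get more weight.
skill = np.exp(-(rmse / sigma_q) ** 2)

# Independence term: models with many close neighbours get less weight.
similarity = np.exp(-(dist / sigma_s) ** 2)
independence = 1.0 / similarity.sum(axis=1)       # includes self-similarity

weights = skill * independence
weights /= weights.sum()
```

Here models 0 and 1 are near-duplicates, so each is down-weighted; model 2 has the worst skill and ends up with the smallest weight.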



More computing power to calculate epicycles does not help. You need a better understanding of weather phenomena. For that you need more, and more accurate, observations to constrain the models across all of the variables. Calculating surface temperatures is just a tiny part of the problem.
Known phenomena are not enough in a complex adaptive system. Plate tectonics, volcanic activity, the sun, the solar system and beyond may still surprise. Long-term predictions will never be possible, but modelling might be useful, especially in understanding local weather.
Instead of trying to improve the predictive ability of models by combining all the failed ones, doesn’t it make better sense to investigate what the Russian model does differently, producing predictions that are validated by the measurements?
They will need many meetings and work groups and publications to effectively demote the satellite data and promote biased ground station data and other measurement flaws.
I thought they had already reduced the uncertainty in their models by adjusting the observed and historical temperature data via a process they call “homogenization”. That way reality matches their simulations.
/snark
Gotta love a graph that uses a skill measurement. I remember reading some review by MM years back that must have used the word skill or skillful several times and I just wanted to go take an antacid.
toujours ecobolleaux
Right on-ez!!
Any paper which starts,
“Although the human impact of recent climate change is now clear,”
Causes me to hit the delete key. It warns there is nothing but agenda based crap to follow.
. . . but doing so ensures publication 97% of the time.
ABSTRACT:
Earth system models are complex beyond our understanding but we keep tweaking them randomly because that’s what we are paid to do.
“New study attempts to introduce the narrative that uncertainties have been eliminated”.
Regarding “Although the human impact of recent climate change is now clear”… when did this happen? Must have missed the research paper, and supporting confirmation studies.
Cargo cultists are building newer and bigger runways and associated bamboo control towers, manned with natives jabbering into coconut shells.
By all accounts, according to their new estimates, those darn cargo planes, filled with wondrous goods, will be landing any day now.
No doubt the patent for all these items will go to The Professor from Gilligan’s Island.
“Although the human impact of recent climate change is now clear” — Exactly WHEN are we going to see some QUANTIFICATION of that Statement? Until that happens, all the rest is so much Propaganda and Cow Manure!
“Produced by a team of 29 international authors”
The quality of any academic paper is inversely proportional to the square of the number of authors. So with 29 authors, you know it is just a gigantic load of bovine excrement.
A tacit admission that all previous models are junk.
In four years, their new super ‘next level’ model will be revealed to be junk, with the introduction of their new, superduper Stage 3 Climate Model.
Their lack of self awareness is amusing.
It seems that these “researchers” think that if all the errors are averaged out they will get a more reliable forecast.
This seems familiar but in a different field.
The 2008 Crash was the worst since the early 1930s.
Within this were contrived instruments called “CDOs” –Collateralized Debt Obligations.
That could only be created with a lot of computer power. Many such obligations were “bundled” into securities.
In a boom, investors abandon caution and seek a higher return, taking on risk. To diminish concerns about the latter, the “bundles” of junk-rated stuff were declared to average out, altogether, as investment grade.
Not so.
The Crash was Mother Nature’s way of dealing with delusions in the financial world.
Mother Nature and Mister Margin right now are messing up theories behind economic intrusion.
She is about to do it “big time” to the world of climate modelling.
Where is Mr. Whipple when you need him?
The modelers could begin by accurately putting the effects of clouds into the models.
I liked the title of one of the references – “The Art and Science of Climate Model Tuning.”
https://journals.ametsoc.org/doi/10.1175/BAMS-D-15-00135.1
Straight out of the abstract: “…Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers…”
Indeed it is “probably not advertised enough” lol.
Many thanks, that’s a very interesting and useful paper. Another quote:
” Why such a lack of transparency? This may be because tuning is often seen as an unavoidable but dirty part of climate modeling, more engineering than science, an act of tinkering that does not merit recording in the scientific literature. There may also be some concern that explaining that models are tuned may strengthen the arguments of those claiming to question the validity of climate change projections. Tuning may be seen indeed as an unspeakable way to compensate for model errors.”
This paper does confirm my worst suspicions about climate models.
I have defined what I called a “pure” climate model. A pure climate model has these three elements:
1. The initial conditions.
2. The physical laws.
3. Absolutely nothing else.
As the paper makes clear, pure climate models are completely impossible. As even the IPCC has admitted, it is almost certainly impossible to make long-term climate forecasts due to its chaotic nature. But, putting that aside, a pure climate model would require computers perhaps trillions of times more powerful than anything we have today.
It seems pretty clear: tuning is basically a sophisticated form of curve fitting. This makes them pretty good at forecasting the past. But it means they are useless at forecasting the future. There’s only one way to test a model’s forecast skills: run the model and then wait 30 years to see how good the forecast was. Of course, we’ve done that: we’ve had modern supercomputer forecasts since the 80’s. And they failed badly, forecasting 2 or 3 times more warming than actually happened. Quite possibly a model that forecast precisely zero warming would have been more accurate.
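The "good at forecasting the past, useless at forecasting the future" point is just overfitting, and it can be demonstrated in a few lines. This is a deliberately artificial toy series (a small trend plus noise, all numbers invented), not climate data: an over-tuned high-degree fit hindcasts almost perfectly yet extrapolates far worse than a simple trend line.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "historical" series: a gentle trend plus noise.
years = np.arange(1960, 2000)
x = years - 1960
temps = 0.01 * x + rng.normal(0, 0.1, x.size)

# A heavily tuned (high-degree) fit reproduces the past almost perfectly...
tuned = np.polyfit(x, temps, 12)
hindcast_err = np.abs(np.polyval(tuned, x) - temps).mean()

# ...but extrapolated 30 years ahead it goes badly wrong, while a
# simple linear fit stays close to the underlying trend.
simple = np.polyfit(x, temps, 1)
future = 2030 - 1960
tuned_2030 = np.polyval(tuned, future)
simple_2030 = np.polyval(simple, future)
true_2030 = 0.01 * future
```

The moral is the commenter's: hindcast skill bought by tuning tells you little about forecast skill.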
The paper said: ” Tuning may be seen indeed as an unspeakable way to compensate for model errors.”
I see it differently. I see tuning as an unspeakable way to fool the world’s gullible politicians and cause them to squander trillions of dollars trying incompetently to solve a problem that almost certainly does not exist.
Chris
MJ, I cannot speak to the Art side of the issue, but as for the Science and Engineering side, “tuning” has a long and distinguished career, being instead fondly referred to as introducing a “fudge factor.” According to Wikipedia: “A fudge factor is an ad hoc quantity or element introduced into a calculation, formula or model in order to make it fit observations or expectations. Examples include Einstein’s Cosmological Constant, dark energy, the initial proposals of dark matter and inflation.”
Interestingly, https://www.dictionary.com/browse/fudging has this to say about the origin and history of the word “fudge”: “verb, put together clumsily or dishonestly, 1610s, perhaps an alteration of fadge “make suit, fit” (1570s), of unknown origin. As an interjection meaning “lies, nonsense” from 1766; the noun meaning “nonsense” is 1791. It could be a natural extension from the verb.”
But “tuning” sounds so much more professional, doesn’t it; as in tuning a piano to obtain near perfect pitch.
If you don’t know how sensitive the Earth is to greenhouse gases emitted by humans, then the human impact is far from settled or clear. Another case of a logical fallacy from the science-is-settled crowd.
I know perfectly well what is needed to squeeze out uncertainty from models. I’ve worked a lot with computer modelling (admittedly of much simpler processes than climate), and no model with three or more parameterized variables is any good. The number of possible ways to get a particular result gets too big to get any sensible information out of the model (well not always, if you know the parameterized data is correct within very tight limits, and can afford lots and lots of computer time you can have a few more parameterized variables).
So, get rid of the parameterizations. Calculate everything from basic physics. In some cases (e.g. cloud microphysics) that will mean a lot of hard, difficult basic research before you can start modelling. In other cases (e.g. convection) the physics is well understood; it just needs a computer a few billion or a few trillion times faster to do the physics on a realistic space and time scale. Plus initialization data vastly more detailed than available at present. Oh, and you might also have to solve a number of classic unsolved mathematical problems, like finding a general solution for the Navier–Stokes equations.
So, go for it boys!
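The "too many parameterized variables" problem described above is sometimes called equifinality: many different parameter combinations reproduce the same tuned result, so matching the data tells you little about which combination is physically right. A deliberately trivial toy model (all numbers invented) makes the point:

```python
# A toy "model" with two free parameters whose output depends only on
# their product: many different parameter pairs fit the data equally well.
def toy_model(a, b, forcing):
    return a * b * forcing

forcing = 2.0
target = 6.0   # the "observation" we tune against

# Both tunings match the observation exactly, yet imply different physics.
fit_1 = toy_model(1.0, 3.0, forcing)   # a = 1.0, b = 3.0
fit_2 = toy_model(3.0, 1.0, forcing)   # a = 3.0, b = 1.0
```

With three or more such parameters the space of equally good tunings grows rapidly, which is the commenter's reason for wanting tight independent constraints on each parameterized quantity.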
What does climate modelling rest upon? An assumption. What does the assumption rest upon? Another assumption. In fact, it is assumptions all the way down!
The primary difference between this Aspen con/fab and nerdy college kids gathering to play D & D, is that the college kids know their game-world is not real.
At least they seem to recognize that there has been something wrong with the plethora of models they have been using. The reality is that the climate change we have been experiencing is caused by the sun and the oceans, over which mankind has no control. Despite the hype, there is no real evidence that CO2 has any effect on climate, and there is plenty of scientific rationale to support the idea that the climate sensitivity of CO2 is zero. So they can improve their models by taking out all code that assumes that more CO2 causes warming, and eliminate all the models that have come up with wrong results. Instead of a plethora of models they should have just a single model, with no fudge factors, that adequately predicted the end of the 20th-century warming cycle and the current global warming pause.
There is a warning one encounters in virtually every financial market, trading desk, etc.;
“Warning! Past performance is not an indication of future performance!” “Traders and investors buy and sell at their own risk.”
These fools are weakly admitting their models are trash, but claiming they can fix their models by attending an “intense workshop at the Aspen Global Change Institute in August 2017”.
Utter silliness!
Oh!
Lovely, they want to add complexity.
A bandaid and lipstick for broken climate models, and excuses for all the alarmist faithful.
Billions and likely trillions of dollars over thirty years and these fools have still not worked out a way or method to test CO₂’s actual influence on atmospheric temperatures. Let alone determine that influence at all locations, altitudes, land or ocean throughout the world.
There’s the lipstick!
Defund them!
Demote any USA researcher who attended via government or taxpayer funds!
ATheoK, I admire the chutzpah of Professor Cox for use of the ambiguous word “can” instead of the definitive word “will.”
Then again, many professors living largely in a university environment get burned when trying to predict the future of the real world.
BTW, the sentence attributed to Prof. Cox is actually nonsensical as written. The ending portion should have been stated as “. . . uncertainties in key aspects of PREDICTIONS OF future climate change”. That is, models and observations cannot in any way affect key aspects of future climate.
Almost agree
Agree!
Agree!
Back in prehistoric HS, English teachers spent interminable amounts of time drumming into our heads definitive word meanings and ‘when’ said word use was apropos.
One of these word-pair conundrums was “will/shall”; including repeated references to General Douglas “I shall return” MacArthur.
The upshot of that lesson from hades is that “will” is nebulous and “shall” is definitive.
Yes, “will” is a much much firmer word than “can”, but not a definitive.
Anybody who thinks that Warmist Alarmism has ever used high power computers in an efficient manner to do ANYTHING needs to be sterilized and denied children.
100% of the crap they keep showing us can be done on Lotus 1-2-3 with a 486 SX.
Hansen did it better in the 1980s with pencil and paper.
For 40 years now, at least two generations, they’ve been “taking climate model evaluation to the next level”.
Wasted lives.