From Yahoo News, via the Christian Science Monitor
A new team tries a new approach to climate modeling using AI and machine learning. Time will tell whether this is a productive effort or an extremely complicated exercise in curve fitting. Their goal is regional-scale predictive models useful for planning. Few admit publicly that these do not exist today, despite thousands of “studies” using downscaled GCMs.
“There are some things where there are very robust results and other things where those results are not so robust,” says Gavin Schmidt, who heads NASA’s respected climate modeling program at the Goddard Institute for Space Studies. But the variances push skeptics to dismiss the whole field.
“There’s enough stuff out there that people can sort of cherry-pick to support their preconceptions,” says Dr. Hausfather. “Climate skeptics … were arguing that climate models always predict too much warming.” After studying models done in the past 50 years, Dr. Hausfather says, “it turns out they did remarkably well.”
But climate modelers acknowledge accuracy must improve in order to plot a way through the climate crisis. Now a team of climatologists, oceanographers, and computer scientists on the East and West coasts of the U.S. has launched a bold race to do just that.
They have gathered some of the brightest experts from around the world to start to build a new, modern climate model. They hope to corral the vast flow of data from sensors in space, on land, and in the ocean, and enlist “machine learning,” a kind of artificial intelligence, to bring their model alive and provide new insight into what many believe is the most pressing threat facing the planet.
Their goal is accurate climate predictions that can tell local policymakers, builders, and planners what changes to expect by when, with the kind of numerical likelihood that weather forecasters now use to describe, say, a 70% chance of rain.
Tapio Schneider, a German-born climatologist at the California Institute of Technology and Jet Propulsion Laboratory in Pasadena, California, leads the effort.
“We don’t have good information for planning,” Dr. Schneider told a gathering of scientists in 2019. Models cannot tell New York City how high to build sea walls, or California how much to spend to protect its vast water infrastructure.
They simply vary too much. For example, in 2015 in Paris, 196 countries agreed there will be alarming consequences if the planet warms by 2 degrees Celsius, measured from the industrial age. But when will we get there? Of 29 leading climate models, the answer ranges from 20 to 40 more years – almost the difference of a human generation – under current levels of emissions. That range is too wide to set timetables for action, which will require sweeping new infrastructure, everything from replacing fossil fuels to switching to electric vehicles to elevating homes.
“It’s important to come up with better predictions, and come up with them fast,” Dr. Schneider says.
This is funny
And it threatens to ruffle feathers in the climate science world, especially at the established modeling centers, like Dr. Schmidt’s NASA group at Goddard. “I think they have oversold what they can do,” Dr. Schmidt says. Is a new model needed? “They would say yes. I would probably say no.”
Apparently a quite modest group.
The other distinguishing feature, Dr. Marshall notes, is those working on it. “The model is actually less important than the team of scientists that you have around it,” he contends. In fact, the 60 to 70 researchers and programmers in the CliMA group represent a veritable United Nations.
Somebody put a map on the wall at the CliMA house, a converted provost’s home at Caltech, and asked everyone to pinpoint their homes. “There were a lot of needles,” Dr. Schneider says.
Here’s the AI part
A climate model that “learns”
CliMA decided on an innovative approach: harnessing machine learning. Satellite and sensor information is freely available – much of it for weather forecasters. Dr. Schneider envisions “training” their model with the last three decades of data, and then routinely feeding it the latest updates. The model itself could “learn” from the data and calibrate its performance with formulas refined by AI, even as the climate changes.
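To make the idea of a model “learning” from observations concrete, here is a minimal toy sketch (my own illustration, not CliMA’s code or method): a single unknown parameter of a hypothetical linear climate response is calibrated by least squares against synthetic “observations”. All names and numbers here are invented for illustration.

```python
import numpy as np

def toy_model(forcing, sensitivity):
    """Hypothetical linear response: warming = sensitivity * forcing."""
    return sensitivity * forcing

# Synthetic "observations": 30 years of made-up forcing and noisy warming
rng = np.random.default_rng(0)
forcing = np.linspace(0.5, 2.0, 30)     # W/m^2, invented values
true_sensitivity = 0.8                  # K per W/m^2, invented
observed = toy_model(forcing, true_sensitivity) + rng.normal(0, 0.05, 30)

# The "learning" step: a least-squares fit of the unknown parameter
# to the data, the simplest possible form of model calibration
fitted = np.sum(forcing * observed) / np.sum(forcing ** 2)
```

A real effort like CliMA’s would calibrate many parameters of a full physical model against decades of satellite and sensor data, but the basic loop – run the model, compare to observations, adjust parameters – is the same idea scaled up enormously.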
The article also discusses the reasons for choosing to program the model in Julia. To read the rest, go to the full article here.
HT/Clyde Spencer
“They hope to corral the vast flow of data from sensors in space, on land, and in the ocean,”
Unfortunately the ‘data’ has been corralled and passed through the abattoir in an effort to undo falsification of the core global warming theory.
Jason buoys released into the oceans began to show a very inconvenient cooling, so they’ve grafted on an adjustment, a sort of ‘Karlization’, to get the preconceived result to match ship buckets and other friendlier 19th-century temperature metrology.
When sea level rise showed an inconvenient reversal a decade ago (supported by the global tide gauge network), reporting stopped for a couple of head-scratching years until they added on an isostatic rebound factor to reflect the change in the volume of the ocean basins since the Glacial Maximum ice load was removed. Adding this volumetric factor onto a linear metric had the ridiculous result that the ‘new sea level’ now stands high and dry above the actual ocean surface. To muddle it all, they launched a satellite to measure global sea level to an accuracy of +/- a few centimeters. They always talk about a 3mm rise a year when tide gauges say 1.8mm.
To ‘nearly quote’ Mark Steyn’s Senate hearing testimony (on climate data integrity): “They know with 95 percent certainty what the weather will be in 2100, but have no idea what it will be like in 1950!” – referencing the use of an algorithm that changes past data annually.
Well, that’s interesting. AI doesn’t have cognitive biases or preconceived notions. They might not get the model they were hoping for.
“I think they have oversold what they can do,” Dr. Schmidt says.
Well, who said that modellers were no good at projection? That’s a textbook example!
Do these people know what they’re doing? Are they mad? We know we’re all doomed with the computer models and the settled science. But if they determine exactly when we’re all doomed, there’ll be absolute panic and pandemonium leading up to the dooming. This has to stop for the sake of an unknown known, or there’ll be known unknowns breaking out everywhere with the results.
Way back in 2007, Stainforth et al. published a paper in the Philosophical Transactions of the Royal Society entitled “Confidence, uncertainty and decision-support relevance in climate predictions”. One of the other authors was Myles Allen, now an IPCC author.
In their abstract they state the following:
“Complex climate models, as predictive tools for many variables and scales, cannot be meaningfully calibrated because they are simulating a never before experienced state of the system; the problem is one of extrapolation. It is therefore inappropriate to apply any of the currently available generic techniques which utilize observations to calibrate or weight models to produce forecast probabilities for the real world. To do so is misleading to the users of climate science in wider society.”
Has anything REALLY changed since then?
Nope. Just more claims that the models are right and reality is wrong.
It will be more of a shakedown than a “shake-up”, like all previous climate scams.
Sounds more like a group of Roman Catholic cardinals arguing about how many angels can actually dance on the head of a pin….
<blockquote>Dr. Schneider envisions “training” their model with the last three decades of data, and then routinely feeding it the latest updates. The model itself could “learn” from the data and calibrate its performance with formulas refined by AI, even as the climate changes.</blockquote>
That might make for some decent weather forecasting, where the prediction is only a few days out, but it’s a fit. Not even controversially. By definition. And so it’s no good at projecting when the atmosphere behaves in ways never seen before as it changes. Any projections it makes will be completely unjustifiable.
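The commenter’s extrapolation point can be shown with a deliberately simple toy example (mine, not from the article): a polynomial fitted to data from a known range matches that range almost perfectly, yet fails badly when evaluated far outside it. The curve, degree, and ranges below are all arbitrary choices for illustration.

```python
import numpy as np

# Training window: the fit only ever "sees" x in [0, 3]
x_train = np.linspace(0.0, 3.0, 50)
y_train = np.sin(x_train)

# A degree-7 polynomial fit: an excellent match inside the window
coeffs = np.polyfit(x_train, y_train, 7)
fit_err = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))

# Evaluated far outside the window, where the fit has no data support,
# the same polynomial diverges from the true curve
extrap_err = abs(np.polyval(coeffs, 6.0) - np.sin(6.0))
```

Here `fit_err` is tiny while `extrap_err` is large: the quality of a fit inside the training range says nothing about its behaviour outside it, which is exactly the concern raised about training a climate model on past observations and then projecting a changed climate.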