From the Climatic SkyNet department comes news of this upcoming climate-model-supercomputer hootenanny at the University of Illinois next week. What could go wrong if we let a supercomputer make decisions for us?
International summit seeks to harness supercomputing power for climate decision-making
Hosted and organized by NCSA and led by Professor of Atmospheric Science Kelvin Droegemeier, the summit’s long-term goal is kilometer-scale global resolution in Earth system modeling and climate projection.
National Center for Supercomputing Applications
The National Center for Supercomputing Applications (NCSA) and the Department of Climate, Meteorology & Atmospheric Sciences (CliMAS) will bring together more than 100 leading experts in climate, Earth System Modeling (ESM), computing and other sectors for the International Climate Computer Summit at the University of Illinois Urbana-Champaign from September 29 through October 2, 2024.
This summit will convene an international group of authorities from academia, government, industry and non-profit organizations to examine the practicability of co-designing a special computational system and modeling framework that supports frontier Earth system science research and climate projection using kilometer-scale global resolution. Additionally, it will address how output from global high-resolution climate projections can be used – especially locally and regionally – to make decisions in areas such as economic and personal risk, health, infrastructure, food production, biodiversity, global geopolitical stability and others.
“Numerous groups around the world are pioneering high-resolution ESMs, which when combined with artificial intelligence and machine learning are poised to transform our understanding of the global Earth system and vastly improve our ability to project future climate states. That is one side of the coin,” said Kelvin K. Droegemeier, Professor of Atmospheric Science and Special Advisor to the Chancellor for Science and Policy at the University of Illinois Urbana-Champaign. “The other side, which the summit is addressing, involves the computational environment required to achieve this transformation. It does not exist, even with AI, but we believe it can be created if the international climate research community joins forces in ways it never has before. Such an effort would not replace existing research strategies but rather add value to them, also opening new vistas of educational opportunity and providing practitioners and stakeholders with the information they need for making decisions across all sectors of society.”
The goal of the climate summit is to assemble the international community toward achieving a transformative milestone: to provide information about Earth’s climate system globally, with the detail of regional weather forecast models through the use of sophisticated ESMs at global resolutions of a few kilometers integrated with artificial intelligence and machine learning. Credible, detailed information at this scale can empower timely climate decision-making at local and regional scales. Achieving this important goal requires computational capabilities and software frameworks beyond those currently available commercially.
NCSA’s powerful supercomputing resources and expert team are well-equipped to tackle the challenge of climate change. For example, NCSA’s Blue Waters and iForge supercomputers hosted at the U. of I. helped researchers model volcanic activity with real-time data to provide daily forecasting, and data from the Extreme Science and Engineering Discovery Environment (XSEDE), a powerful collection of integrated digital resources and services including supercomputers, visualization and storage systems, helped bring the Amazon’s “beating heart” weather patterns to life for researchers. NCSA is ready to be a powerful asset in the fight against climate change.
“NCSA and the University of Illinois have a long history of collaboration with climate researchers on solving the most challenging environmental questions,” NCSA Director Bill Gropp said. “Climate modeling of this scale will require innovative high-performance computing resources designed for these problems. With a tradition of deploying and helping scientists use the latest technologies, we are proud to sponsor this summit and demonstrate our commitment to helping decision-makers address and plan for climate uncertainty.”
The Summit will bring 100+ invited participants from around the world to Champaign-Urbana from sectors including government, climate research, higher education, computer science, technology and industry. The public is invited to attend the summit virtually at no charge, and all plenary sessions will be live-streamed globally during the summit. Registration for virtual attendance at the summit is available at this link.
The International Climate Computer Summit is sponsored by the National Science Foundation and organized by the National Center for Supercomputing Applications, the Department of Climate, Meteorology & Atmospheric Sciences at the University of Illinois, the NSF National Center for Atmospheric Research, the University of Maryland, Berkeley Lab, the Max Planck Institute for Meteorology, the University of Utah and the Pacific Northwest National Laboratory.
14.2.2.2 Balancing the need for finer scales and the need for ensembles
In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.
Is there a particular reason why they don’t predict weather?
Maybe this will help. (Even the IPCC is skeptical.)
IPCC – Intergovernmental Panel on Climate Change
“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
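A minimal sketch of what that sentence means in practice, using the logistic map as a toy stand-in for a chaotic system (this is only an illustration, not anything from the IPCC or the summit):

# Toy demonstration of sensitive dependence on initial conditions (logistic map at r = 4).
# Two runs start one part in a million apart; within a few dozen steps they are unrelated.
r = 4.0
x_a, x_b = 0.400000, 0.400001
for step in range(1, 41):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: run A = {x_a:.6f}  run B = {x_b:.6f}  gap = {abs(x_a - x_b):.6f}")
# This is why a single long-range run of a chaotic system is not a prediction,
# and why modellers fall back on ensembles and probability distributions instead.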
“What could go wrong if we let a supercomputer make decisions for us?”
They aren’t proposing anywhere that supercomputers would make decisions. It is a conference to find ways to get kilometre scale resolution. They say of the conference:
“Additionally, it will address how output from global high-resolution climate projections can be used – especially locally and regionally – to make decisions in areas such as economic and personal risk, health, infrastructure, food production, biodiversity, global geopolitical stability and others.”
“providing practitioners and stakeholders with the information they need for making decisions across all sectors of society”
People make the decisions. They just would have finer resolution data.
I must admit that I see a lot of upside in having kilometre scale resolution of local climatic conditions.
PROVIDED that the probity and provenance of the actual recorded temps etc were reliable
(that is – no “homogenising”, “adjusting”, “infilling”, “averaging” or other such shenanigans that just produce ‘constructs’).
Maybe this way we could properly understand the behaviors of ALL the hundreds / thousands of unique regional climates that operate all around this planet, and ditch the “global average” bullshit that climate grifters have glommed onto in order to prosecute their political ideologies and agendas.
1 kilometer scale resolution is insufficient.
Even a 1 meter scale is insufficient.
Multiple coupled non-linear dynamic chaotic energy systems cannot be modeled to make forecasts, predictions, projections, or whatever, that are worth the cost.
In fact, chaos cannot be simulated at all.
This whole insanity comes from quietly assuming CO2 is the butterfly.
“People make the decisions.”
Yes, people program the models to get the result they want.
Is that what you were trying to say?
This will just let them run more combinations of their many “parameters” until they get something they “like”.
“providing practitioners and stakeholders with the information they..
…. NEED….
…. to further their aim of one-world socialist totalitarian governance.
How will they know whether their “projections” are correct, until/unless they happen? And what happens when one, two…twenty “projections” don’t come to pass? Will they pack it in? Or keep failing?
Stokes won’t answer.
That’s easy: print trillions more money and flush it down millions of toilets.
“It is a conference to find ways to get kilometre scale resolution.”
Which raises the question: Over what time frame would those results be valid?
One other thought comes to mind: at kilometer-scale resolution, land-use patterns will likely make more of a difference than greenhouse gas concentrations.
The most useful product of the conference would be to get a better handle on how to parameterize thunderstorms and other related weather.
I think the push for finer resolution is indeed to model (not parameterize) first hurricanes and then thunderstorms, cold fronts etc.
Until they get rid of the under-LYING features of fake CO2 warming, and the hindcasting to FAKE surface data..
… models will remain totally meaningless and irrelevant to the Earth’s system.
The results of the modeling would then be used to develop better ways to parameterize hurricanes, thunderstorms, cold fronts, etc to be used in the global circulation models.
The NWS gridded forecasts have about 1-mile resolution, which is actually a bit too coarse for my area (6,000′ from the coast). OTOH, the gridded forecasts for the Las Vegas area typically call for the Strip to be a few degrees warmer than the surrounding city, and that shows up when driving on I-15 through Las Vegas.
More detail is nice, but you are still stuck with a “coupled non-linear chaotic system”, as quoted elsewhere in this topic.
What a joke, this is never going to happen.
“Computer says no” (or this one) is how it will work: no questioning, no initiative and no common sense, just accept what the computer says.
If you think otherwise, then watch the UK Post Office Inquiry on YouTube, particularly the top management and politicians’ evidence, along with Gareth Jenkins. It may give you food for thought.
No, that is nonsense. This is simply a proposal to upgrade the resolution on existing Earth modelling programs. They don’t have any such functionality.
I suggest you take my advice and watch the Post Office Horizon Inquiry. It’s an object lesson in how people in power believe the output from a computer over all common sense.
Alan Bates received a knighthood today for spending 20+ years fighting the PO, politicians and Civil Service.
The climatistas fear Trump being a dictator for a day, but they look forward to being dictated to by a super computer.
I wonder if Nick has heard of China’s social credit score and their totalitarian surveillance state.
Sounds like a great way to find out each person’s carbon footprint! /s
“People make the decisions.”
Not if supercomputers are giving them “the information they need for making decisions across all sectors of society.”
“They” will select whichever results they please and hide the rest so “they” can justify their decisions with impunity.
Your points are accurate.
However, there is a trap in that line of progress.
The output of these computer projections is NOT data!! It’s not even a valid model of the climate.
Before worrying about scale resolution, how about a conference on how to reduce the enormous uncertainty of model projections? Or, maybe a conference on how to properly validate the model’s ability to accurately reflect the actual Earth’s climate.
How to get better predesigned junk answers quicker.
Reality…. not important.
High-resolution weather projections will be useful. Climate projections, not so much. I can predict that the climate where I live will be indistinguishable from the current situation in 25, 50, 75 years. What could change this: (1) all the Washington State volcanoes could simultaneously go boom, or (2) the Cascadia Subduction Zone could destroy everything west of the Cascade Crest, or (3) 1 + 2.
Or the lunatics at the Dept. of State, Dept. of Defense, and current occupants of the White House get us into a nuclear conflagration with Russia … and Puget Sound (loaded with strategic targets) is nuked into oblivion. That would be … really bad.
So where does accuracy enter the picture? GIGO rules climate ‘computing’
More climate propaganda junk… more quickly
Climate model uncertainty grows at each and every iteration step, a fact that all the FUD Stokes generates cannot change.
How many actual measurements, going back how far and covering how much area, are there to go into the “GI” part of GIGO?
(Will treemometers be included?)
Maybe Mann is a friend of Treebeard?
They clearly don’t understand the difference between precision and accuracy.
Such a basic concept yet not grasped
Our lab scale could give a reading out to 10 or so decimal places of a gram. Yet our MDL (Method Detection Limit) for a certain test result reportable to the OhioEPA was only 4.5 milligrams/Liter.
Any lower result from our lab couldn’t be trusted to be accurate.
One km may capture almost all thunderstorms where whole thunderstorms could have previously been missed. As I understand it, now they have to account for the following:
Finite representation of computers – computers have limited numbers.
Machine epsilon – relative error due to rounding in floating-point calculations.
Machine epsilon residual – the rounding-error float which, if added to 1, returns 1.
Parameterization – methods to try to adjust limits in computer calculations, especially models.
1941 Kolmogorov microscale theory on fluid turbulence.
Navier-Stokes equations must be solved (the Clay Mathematics Institute offers $1,000,000 for solving it).
Unknown climate variables.
Made it readable. Spirited from a Dr. Christopher Essex online lecture. Hope I got it right.
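A small sketch of the first three items on that list, using ordinary double-precision Python floats (this is standard IEEE-754 behaviour, not anything specific to climate codes or the lecture):

import sys

# Machine epsilon: the relative spacing of double-precision floats near 1.0.
eps = sys.float_info.epsilon
print("machine epsilon:", eps)          # about 2.22e-16

# Anything smaller than half of epsilon simply vanishes when added to 1.0.
print(1.0 + eps / 2 == 1.0)             # True: the sum rounds back to exactly 1.0
print(1.0 + eps == 1.0)                 # False: eps is the smallest resolvable increment

# Rounding errors accumulate over repeated operations.
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total)                            # close to, but not exactly, 1,000,000.0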
And the uncertainty of model outputs grows at each iteration; all the GPUs in the world can’t alter this.
Ok, get the developers of the current not-fit-for-purpose climate models together so they can develop a higher-resolution not-fit-for-purpose climate model and program, based on the erroneous theory that CO2 drives the climate, and have that program tell us how to live, work, eat and play. Sounds like the WEF’s, U.N.’s and President Xi’s dream and rational people’s nightmare. Count me out!
More video games with bigger computers. That’s the ticket. That’s just what we need.
/sarc.
Kilometer scale modeling would not be able to detect or model the “Butterfly Effect.” The area of a butterfly is 1/25,000,000,000 of a square kilometer. To model the earth with butterfly-size cells would require 12,750,000,000,000,000,000 cells. At that point you can start playing around with the butterfly effect and chaos theory which must be accounted for in weather and in climate prediction.
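For anyone who wants to check that arithmetic, a quick back-of-the-envelope calculation (the Earth’s surface area of roughly 510 million km² is my own figure; the butterfly area is the one quoted above):

# Sanity check of the butterfly-sized-cell count.
earth_surface_km2 = 510_000_000          # roughly 510 million square kilometres
butterfly_area_km2 = 1 / 25_000_000_000  # the area quoted in the comment above

cells = earth_surface_km2 / butterfly_area_km2
print(f"{cells:.4g} cells")              # about 1.275e+19, i.e. 12,750,000,000,000,000,000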
When you get to that level of computing power (it’s not impossible), remember that Working Group I of the IPCC’s Third Assessment Report proclaimed that its models could not and should not be used for prediction far into the future. Because of chaos theory, prediction would be impossible.
Return to GO. Do not collect $200,000,000,000,000.
Your scale factor is off, not the size of the butterfly.
It needs to be atomic/molecular scale and the size of the computer would be the size of the universe.
Then there is the Heisenberg uncertainty principle.
Hmmm…. a thought for the middle of the night. By measuring weather conditions, are we changing the climate?
Good points. And, yes – if we measure weather conditions at the atomic level you suggest, we necessarily must change weather, and consequently, perhaps, climate. Moreover, the googolplex of universes created by all those measurements will have their own problems with climate. Will that be our responsibility?
Sky Net would conclude that all CO2 producing units must be terminated.
(Someone had to say it.)
“This unit … must … die.”
But what if the AI computers said that there is NO ‘climate emergency’ & that more CO2 is a good thing; would the CAGW / Net Zero mobs accept that?
or
Would they complain, “It’s a fraud”, & “You have stolen our dreams of control” then start a riot to smash the machines & overturn the result ???
Anyway, we know the answer to the ultimate question is ’42’ (with 97% certainty)
Thanks for all the fish.
*this mission is too important for me to allow you to jeopardise it*
https://duckduckgo.com/?q=2001+a+space+odyssey+i+can%27t+let+you+jeopodise+this+mission+dave&t=h_&iax=videos&ia=videos&iai=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DARJ8cAGm6JE
We have been told for decades what the weather will be like in 2100. Why all this re-work now?
“The science is settled.”
So why are they dedicating ever more computing resources (and $millions) to further explore the mechanisms of climate? They know what the real control knob is, CO2. End of discussion.
No matter how many times it needs saying, a human-designed, programmed and operated logic machine of any potential capability is still a human-designed, programmed and operated logic machine. It cannot and will never be able to make credible predictions of the future, since no human has ever managed that at any useful or desirable scale or reliability. Even Watt had to experiment with and modify his ideas for greater efficiency in engine design.
Converting our processes into 0s and 1s makes for accurate record keeping and logic provided the input data is accurate and that is the vital piece of the jigsaw that will always be missing until an event unfolds exactly as foretold in the future. Humans cannot see the future as historical charlatans have shown us since day one. We can make lucky guesses but most successes come from sheer hard work in the present.
We can imagine the future and that is what separated humans from the rest of the animal kingdom.
That we can imagine does not mean we can predict, only that we can conjure possibilities.
Meanwhile, the atmosphere is operating at finer than mm resolution as the authentic model of its own performance as an emitter, reflector, energy converter, energy transport medium, etc. We are watching from the geostationary satellites, producing 2 km visualizations for the full disk of the planet, and 1 km for regions such as CONUS. We can already tell from observing its performance that incremental non-condensing GHGs are not capable of driving the climate outcome – certainly not to any harmful result. We can also see plainly that attribution of ANY of the reported surface warming to the incremental radiative effect of rising GHG concentrations has been unsound all along.
Higher resolution modeling? OK, go for it, but we can already see that there is no “climate” problem to begin with, arising from GHGs such as CO2, CH4, N2O.
Watch from space. Please be sure to read the full text description at this time-lapse video of the GOES East full disk Band 16 images from last year.
https://youtu.be/Yarzo13_TSE
Here is the description at that video, for those who may be interested.
########
Are emissions of non-condensing greenhouse gases (GHGs) such as CO2 a risk to the climate system? Watch from space to see whether the concept of a radiative heat “trap” explains the observed result.
This time-lapse video captures 7 recent days of hourly images generated by NOAA from high resolution full-disk radiance data from the GOES East geostationary satellite for Band 16.
NOAA calls this the CO2 Longwave IR (infrared) band. It is centered at a wavelength of 13.3 microns, at the edge of the “atmospheric window” part of the infrared spectrum. The “brightness temperature” color scale for visualization is such that the radiance at 50C on the scale (red) is 13 times the radiance at -90C (white.) It is in this narrow band of wavelengths that a significant part of the claimed static warming effect of incremental CO2 concentrations is computed.
So what? The emitter output is obviously not that of a passive radiative insulating layer. The motion of the atmosphere is a response to absorbed energy and to the rotation of the planet. These dynamics change everything about where to expect the energy involved in the static warming effect (i.e. the GHG “forcing”) experienced at the surface to end up. The formation and dissipation of clouds dominates the overall result, and the overturning circulations at local, regional, and global scale produce highly variable emitter outputs over time and location. It is all strongly self-regulating as the motion delivers just enough absorbed energy from the surface to high altitude and from the tropics to the poles to be more easily emitted to space as longwave radiation.
The atmosphere is the authentic model of its own performance as an emitter and as a controller of longwave emission from the surface. What do we see and learn from watching it perform? The visualization helps us grasp that heat energy cannot be made to accumulate on land and in the oceans to harmful effect by what increasing concentrations of non-condensing GHGs do in the atmosphere. And for whatever warming has been experienced and measured down here, the minor effect of increasing GHGs cannot be isolated for reliable attribution.
So as I see it, is there risk of harmful warming from GHGs? No. We can see from space that it doesn’t work that way.
[Edit 8-23-2023 The color scale NOAA uses to convert “brightness temperature” for the visualization is given here.
Brightness temperature itself is computed by NOAA from the radiance data, using an equation and coefficients from the user manual for the imager. The radiance at 50C “brightness temperature” (red) is 13 times the radiance at -90C (white.) Radiance is the strength of the flow of energy being emitted from the planet and sensed at the satellite in this band.]
#######
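As a cross-check on the “13 times” figure in that description, here is a rough Planck-function calculation at 13.3 µm for blackbody temperatures of 50 °C and −90 °C (my own back-of-envelope check, not NOAA’s imager coefficients):

import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
K = 1.381e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance, W per m^2 per steradian per metre of wavelength."""
    numerator = 2.0 * H * C ** 2 / wavelength_m ** 5
    return numerator / math.expm1(H * C / (wavelength_m * K * temp_k))

wl = 13.3e-6                                  # Band 16 centre wavelength
hot = planck_radiance(wl, 273.15 + 50.0)      # red end of the colour scale
cold = planck_radiance(wl, 273.15 - 90.0)     # white end of the colour scale
print(f"radiance ratio at 13.3 um, 50 C vs -90 C: {hot / cold:.1f}")   # about 13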
What is really amazing is that the present AI nonsense is clearly rooted in climate model type nonsense: which is to say – given much more credit than it actually deserves. Computing used to be about GIGO: Garbage In, Garbage Out i.e. the computer would only output garbage if given bad input.
Climate models – they can output anything at any time for any reason. You want hot? cold? flood? drought? voila!
LLMs are the same way: beyond the scraping of the internet, LLMs can still f*** it up because they will dump out anything at any time for any reason. From a lawyer suing Avianca, where the LLM literally pulled a bunch of legal references out of its ass, to modern LLMs that will invent a whole library of past work for an author who never wrote it, to dumping out code that is incorrect – LLMs appear to be nothing more than a really energy-intensive bad employee: one you have to double-check constantly to make sure they don’t screw the pooch.
Fortunately, a judge just ruled that AI companies are liable for bad product…now if we could only get the same ruling for GCMs.
Kaching. Say hallelujah and send more money!
The use of these computers, regardless of how large and massively parallel, is still just a parameterized projection. To truly model the climate of the planet requires much smaller resolution cells, coupled with non-linear differential equations, as part of a fluid dynamic analysis. Today’s computers are simply not capable of doing so. They can make pretty pictures & movies, but these are not real models of the climate.
Perhaps when quantum computers are large enough and fast enough, say in 100-150 years, we might be able to really model the climate in a time frame that is useful. Quantum computers could solve differential equations directly, but these machines are only at the research stage, not ready for use as tools.
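For a sense of the bookkeeping involved, a rough back-of-envelope estimate of a 1 km global atmospheric grid (the vertical-level count and time step below are my assumptions, purely for illustration, not figures from the summit):

# Very rough scale of a kilometre-resolution global atmosphere simulation.
earth_surface_km2 = 510_000_000              # ~510 million km^2 of surface
columns = earth_surface_km2                  # one 1 km x 1 km column per km^2
levels = 100                                 # assumed number of vertical levels
cells = columns * levels                     # ~5.1e10 grid cells

seconds_per_year = 365 * 24 * 3600
time_step_s = 10                             # assumed; fine grids force short time steps
steps_per_century = 100 * seconds_per_year / time_step_s

print(f"grid cells:            {cells:.2e}")
print(f"time steps (100 yr):   {steps_per_century:.2e}")
print(f"cell updates (100 yr): {cells * steps_per_century:.2e}")   # roughly 1.6e19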
Humans turning over important decision-making to a computer reminds me of the adage about IT purchasing decisions of the 1960s through the mid-80s: no one gets fired for buying IBM. It was the low-risk decision for everyone, except maybe a few in the C-suites.
So I can easily see decision-making about climate change devolving almost immediately to the super-computers, especially when almost no one at the involved NGOs and other organizations understands computers, much less super-computers. No one will suffer consequences for a bad decision to turn decision-making over to computers. After all, they’re talking super-computers, and these computers must be good because they’re… super!
Sounds like one more huge waste of money. Stop that.
Urbana-Champaign (or Champaign-Urbana), eh? Where HAL-9000 was built. An AI machine, tasked to conduct a space mission where the lives of a number of astronauts were at stake. Whose builders didn’t actually understand its workings, according to Clarke’s amazingly prophetic novel 2001: A Space Odyssey (written before the film was made, but only published afterward). No one was aware of the fact that HAL had misunderstood his instructions, and misinterpreted the actions of the astronauts as jeopardizing the mission – leading HAL to kill all but one of them.
Rely on computer models we don’t understand to formulate policy affecting all life on Earth? What could possibly go wrong?