Essay by Eric Worrall
According to advocates, AI can generate better flood predictions than physics and geography. But is this just more magic box thinking?
Should AI’s Role To Cut Greenhouse Gas Emissions Be Greater?
By Carolyn Fortuna
Scientists warn that heat waves, floods, droughts, and severe storms will get far worse in the decades ahead unless we change course. Looking ahead, could AI’s role in developing new climate models save us many gigatons of carbon emissions?
…
AI’s role in the struggle against climate change is already prominent and is also controversial. While it seems evident that AI can serve in the pursuit of a greener future, checks and balances that ensure fairness and equity must be implemented.
…
For decades, scientists looked at climate prediction models based largely on the rules of physics and chemistry to forecast weather patterns. Now hybrid models incorporate machine learning and other generative AI tools, which help climate scientists create even more accurate and precise systems. For example, doctoral students are working with officials from the Tennessee Valley Authority to provide a hybrid flood prediction system more accurate than the one currently in use, which is based solely on physics.
…
Read more: https://cleantechnica.com/2023/12/31/should-ais-role-to-cut-greenhouse-gas-emissions-be-greater/
The article also provides a link to their ICEF submission:
ARTIFICIAL INTELLIGENCE FOR CLIMATE CHANGE MITIGATION ROADMAP
Innovation for Cool Earth Forum
DRAFT FOR COMMENT
October 2023…
PREFACE
Artificial intelligence (AI) is a hot topic. One business leader recently called it “the defining
technology of our time.” Another said “It is difficult to think of a major industry that AI will not
transform.” Meanwhile, countries around the world are struggling to respond to the challenge of climate change. Despite encouraging developments, including steep declines in the price of renewable power, global emissions of greenhouse gases keep rising. Scientists warn that heat waves, floods, droughts and severe storms will get far worse in the decades ahead unless we change course.
Can AI help cut emissions of greenhouse gases? This roadmap explores that question. Our goal is to provide a useful resource for experts and non-experts alike. In Part I of the roadmap, we provide
brief introductions to both AI and climate change. In Part II, we explore six areas in which AI is
helping respond to climate change and could do much more. (These are greenhouse gas emissions monitoring, the power grid, manufacturing, materials innovation, the food system and road transport.) In Part III, we explore cross-cutting barriers, risks and policies. We finish with findings and recommendations.

The relationship between AI and climate change is a big topic. Among the questions we do not
explore in this roadmap are (1) how AI could contribute to climate change adaptation (an important
area for work and study) and (2) whether the broad societal forces that AI may unleash are more
likely to help or hinder the response to climate change (a difficult question in light of the many
uncertainties with respect to AI’s impacts in the years ahead). Instead, we aim to provide a resource that will make favorable outcomes more likely, pointing toward ways in which AI can contribute to climate solutions.

This roadmap builds on the body of literature produced annually in connection with the ICEF
conference. Previous roadmaps have addressed the following topics:
- Low-Carbon Ammonia (2022)
- Blue Carbon (2022)
- Carbon Mineralization (2021)
- Biomass Carbon Removal and Storage (BiCRS) (2020)
- Industrial Heat Decarbonization (2019)
- Direct Air Capture (2018)
- Carbon Dioxide Utilization (2017 and 2016)
- Energy Storage (2017)
- Zero Energy Buildings (2016)
- Solar and Storage (2015)
Read more: ICEF AI Roadmap
The reference to checks and balances is entertaining. From the full report:
… Bias-related risks when using AI for climate mitigation include using AI models that prioritize certain groups due to historic data availability. For example, data for wealthier nations and neighborhoods are often better than data for poorer ones. Privacy-related risks include unauthorized data leaks to third parties, personal identification and even surveillance. Security-related risks are especially acute if AI is used for real-time decision-making (for example in operating factories or the electric grid). …
Read more: ICEF AI Roadmap
Chapter 10 Risks explains these bias-related risks in more detail. They include cultural biases, programmer biases and data biases (e.g. recommending more solar panels for an area already rich with solar panels, because the data show it is obviously a good place for solar panels, while ignoring other potentially useful locations).
I suspect this point about checks and balances has been added because some of their preliminary AI model runs produced some embarrassing recommendations.
For example, if you were to feed an artificial intelligence RCP8.5 climate scenario assumptions, then ask the AI to maximise economic production in a global hothouse scenario, the AI might recommend ignoring renewables and maximising fossil fuel energy production.
If you then program the AI to give more priority to the alleged climate harms to nations like Bangladesh and Arabia, the AI might still recommend ignoring renewables, instead subsidising air conditioners and building sea walls and flood levees, rather than producing the politically acceptable recommendation of more climate action.
It would obviously be unacceptable for the AI to produce a product which embarrasses its political backers, so there would likely be a strong temptation for AI researchers to add “checks and balances” to their system until it produces the right answer. This is a lot like the policymaker review of IPCC reports, or the allegedly dubious adjustments of temperature records, except that AI scientists who yield to the temptation to prioritise political correctness would be more likely to restrain their systems by tweaking the software rather than by editing the final product.
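The tweaking described here usually takes the form of a penalty weight in an objective function. A minimal toy sketch of the mechanism, with invented numbers and option names (nothing here comes from the report):

```python
# Hypothetical planner: pick an energy mix to maximise "economic output",
# then tune a carbon penalty weight until the preferred option wins.
def best_option(carbon_penalty):
    options = {
        "fossil":     {"output": 100.0, "emissions": 10.0},
        "renewables": {"output":  80.0, "emissions":  1.0},
    }
    # Score = output minus a tunable penalty on emissions.
    def score(o):
        return o["output"] - carbon_penalty * o["emissions"]
    return max(options, key=lambda name: score(options[name]))

print(best_option(0.0))   # "fossil" -- the unconstrained answer
print(best_option(5.0))   # "renewables" -- after tuning the penalty
```

The point of the sketch is that the change lives in a single weight inside the software, not in any edit to the final report.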
Whether this “checks and balances” constrained AI product has any practical value is a different question.
Artificial Ignorance, a solution looking for a problem.
A toy that the ignorant take seriously.
If they don’t bias the AI, then the weather predictions for, say, 10 days out should become better. The reason is that if a particular set of pressure readings and temperatures has occurred before, then the weather should progress much as it did before. All we’re doing here is pattern-matching, and that is where the Big Data aspect of AI does things pretty well.
This will not take into account any changes over the data record, such as CO2 increase (which won’t make any perceptible difference anyway) or changes in irrigation, crop type, or city growth (which will make a difference).
What this will not do is enable a longer-term prediction of weather, as in years, decades, or centuries. Thus if they claim that it can predict climate, they’re extracting the urine.
I have a bit of fun looking at the weather forecasts 3 days out and watching the predicted weather change as that reduces to 2 days, then tomorrow, then today: the predicted temperature shifts by maybe 2 or more degrees, and the forecast flips between sunny and rainy. With the 10-day forecasts, it’s a coin-toss. With current technology, we just can’t predict that far with any accuracy. This is a predictable limitation of the climate/weather models because of the large cell size used, which means that we can’t run on first principles and real physics, but must estimate averages. The computing power needed to run a cell size of 10 metres or so is well beyond current computers. Since I’ve seen 10 m clouds, you would need that cell size to describe conditions closely enough to use physics to describe the evolution; you would also need to run the next iteration before the wind moved the air 10 m, and at very high precision to capture the small changes over that time and space interval. You would also need a model of the ground and its crops at that resolution, and of ocean currents too. You can see that it is actually possible in theory, just not attainable in practice.
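The cell-size argument can be made concrete with rough back-of-envelope arithmetic. All the numbers below are my own round assumptions (Earth's surface area, a ~20 km weather-relevant atmosphere, a ~10 m/s wind), not figures from the comment:

```python
# Rough estimate of the grid implied by a 10 m cell size.
earth_surface_m2 = 5.1e14      # ~510 million km^2, assumed
atmosphere_depth_m = 2.0e4     # ~20 km of weather-relevant atmosphere, assumed
cell_edge_m = 10.0             # the 10 m cell size discussed above
cell_volume_m3 = cell_edge_m ** 3

n_cells = earth_surface_m2 * atmosphere_depth_m / cell_volume_m3
print(f"cells required: {n_cells:.1e}")   # on the order of 1e16 cells

# Time step: each iteration must finish before a ~10 m/s wind
# carries air across one 10 m cell, i.e. about one second.
wind_speed_ms = 10.0
dt_s = cell_edge_m / wind_speed_ms
steps_per_10_days = 10 * 24 * 3600 / dt_s
print(f"steps for a 10-day forecast: {steps_per_10_days:.0f}")
```

Roughly 10^16 cells updated nearly a million times for one 10-day run, which is the sense in which the scheme is possible in theory but not in practice.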
Thus given the impracticality of getting a reliable model of the weather, doing a pattern-match on previous configurations looks to me to be a good idea, and may give better predictions. Knowing the weather 10 days out to reasonable accuracy could improve the decisions of farmers, and thus be worth it. We’ll just need to check the predictions against reality to see how accurate it ends up. It is after all what human weather-forecasters do, which is to take the current configuration and progress it with estimates of how it will develop based on their knowledge of what happened before. Some people do it better than others, through having a greater depth of knowledge.
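The pattern-matching idea these comments describe is essentially an "analog forecast": find the historical day whose observed fields most resemble today's, then predict that the weather will evolve as it did then. A minimal sketch with synthetic data (a real system would use reanalysis fields, and the array sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic archive: 5000 past days of a 100-point pressure/temperature field.
n_days, n_gridpoints = 5000, 100
archive = rng.normal(size=(n_days, n_gridpoints))

# "Today" is a noisy repeat of archive day 1234.
today = archive[1234] + 0.05 * rng.normal(size=n_gridpoints)

# Find the closest historical match, excluding the final 10 days so that
# a "10 days later" outcome exists for every candidate.
distances = np.linalg.norm(archive[:-10] - today, axis=1)
best = int(np.argmin(distances))

# The forecast is simply whatever followed the analog day, 10 days on.
forecast = archive[best + 10]
print(best)  # recovers day 1234, the planted analog
```

This is the Big Data strength the comment points to: no physics, just a nearest-neighbour search over past configurations, which is also why it inherits the limitations discussed below.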
Not going to happen any time in the foreseeable future. Even with quantum computers providing the processing power, measurements do not exist at the precision and quantity needed to supply sufficient data.
Even with km-sized cells, where are the wind speed, humidity, pressure, insolation, etc. data throughout each of those cells going to come from? From averaging!
With coupled, nonlinear equations, real time data is needed with sufficient detail to track changes. Even then, looking into the future is like looking into a crystal ball, even for an AI. Until the ability to control weather is available, climate will change as it will. The butterfly effect is real, even if oversold.
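The "from averaging" point can be illustrated with a toy gridding exercise: a handful of sparse stations feeding km-scale cells means every cell value is an average or interpolation, never a measurement. Synthetic one-dimensional data throughout, with invented station counts and a made-up temperature trend:

```python
import numpy as np

rng = np.random.default_rng(1)

# 20 stations scattered along a 100 km transect (positions assumed).
n_stations = 20
station_x = rng.uniform(0, 100, n_stations)
station_temp = 15 + 0.1 * station_x + rng.normal(0, 1, n_stations)

# Grid the transect into 10 km cells; each cell gets the mean of
# whatever stations happen to fall inside it.
cells = np.arange(0, 100, 10)
gridded = np.full(len(cells), np.nan)
for i, left in enumerate(cells):
    in_cell = (station_x >= left) & (station_x < left + 10)
    if in_cell.any():
        gridded[i] = station_temp[in_cell].mean()

# Cells with no station at all stay NaN and must be filled by interpolation.
print(np.isnan(gridded).sum(), "of", len(cells), "cells have no station")
```

Scale this to three dimensions and every extra field (wind, humidity, insolation) and the fraction of genuinely measured cell values becomes vanishingly small.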
AI, more particularly Large Language Models (LLMs), can only produce text — they cannot REASON. That is, they cannot THINK.
“Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.” (Wikipedia)
A recent study reported in the NY Times found that:
“On a task that required reasoning based on evidence, however, ChatGPT was not helpful at all. In this group, volunteers were asked to advise a corporation that had been invented for the study. They needed to interpret data from spreadsheets and relate it to mock transcripts of interviews with executives. … Here, ChatGPT lulled employees into trusting it too much. Unaided humans had the correct answer 85 percent of the time. People who used ChatGPT without training scored just over 70 percent. Those who had been trained did even worse, getting the answer only 60 percent of the time.”
Generative AI is good at pattern recognition — but pattern recognition is a superpower of the human mind (maybe of all living minds including those of animals).
This would seem to make GenAI good at weather or storm prediction — recognizing the patterns in the NOW and predicting the patterns expected tomorrow or later in the week.
But if reasoning is required — and it is, as Joe Bastardi points out — GenAI will not be as good as a well-trained human. Faster maybe, but not better.
“Should AI’s Role To Cut Greenhouse Gas Emissions Be Greater?”
If AI’s goal is to cut greenhouse gas emissions then the “I” in AI must stand for incompetence, idiocy, irrelevance or something similar. It certainly has nothing to do with intelligence.
The picture at the top of the post.
Maybe it should have said, “Take us to the AI Gorical.”?
Same input to produce the same output.
Given that every major university seems to fund a department relying heavily on the development of climate models as daily work and as the inspiration for papers to publish, what would happen the year after anyone, anywhere, anyhow made a working model that everyone agreed was “good enough”?
Success would be a disaster to hundreds of PhDs and PhD candidates.
Creating an AI that produces “the right results” requires programmers who understand how to arrive at “the right results” independent of what the data say.
Which is why they could never allow it to be open source.