Essay by Eric Worrall
Why did they train the AI initially using climate models? Why not ignore the models and use the observations directly to train the AIs?
AI research predicts planet will warm faster than expected
By CNN 11:27am Jan 31, 2023
The study estimates that the planet could reach 1.5 degrees Celsius of warming above pre-industrial levels in a decade, and found a “substantial possibility” of global temperature rises crossing the 2 degrees threshold by mid-century, even with significant global efforts to bring down planet-warming pollution.
Data shows average global temperature has already risen around 1.1 degrees to 1.2 degrees since industrialisation.
“Our results provide further evidence for high-impact climate change, over the next three decades,” noted the report, published on Monday in the journal the Proceedings of the National Academy of Sciences.
…Read more: https://www.9news.com.au/world/climate-change-artificial-intelligence-dire-forecast-for-planet-future/764893d8-2a02-4214-8980-d1e4a4b5ca87
The abstract of the study:
Data-driven predictions of the time remaining until critical global warming thresholds are reached
Edited by Michael Mann, The Pennsylvania State University, University Park, PA; received April 25, 2022; accepted December 14, 2022
January 30, 2023
120 (6) e2207183120
The United Nations Paris Agreement aims to hold global warming well below 2 °C and pursue 1.5 °C. Given the clear evidence for accelerating climate impacts, the time remaining until these global thresholds are reached is a topic of considerable interest. We use machine learning methods to make truly out-of-sample predictions of that timing, based on the spatial pattern of historical temperature observations. Our results confirm that global warming is already on the verge of crossing the 1.5 °C threshold, even if the climate forcing pathway is substantially reduced in the near-term. Our predictions also suggest that even with substantial greenhouse gas mitigation, there is still a possibility of failing to hold global warming below the 2 °C threshold.
Leveraging artificial neural networks (ANNs) trained on climate model output, we use the spatial pattern of historical temperature observations to predict the time until critical global warming thresholds are reached. Although no observations are used during the training, validation, or testing, the ANNs accurately predict the timing of historical global warming from maps of historical annual temperature. The central estimate for the 1.5 °C global warming threshold is between 2033 and 2035, including a ±1σ range of 2028 to 2039 in the Intermediate (SSP2-4.5) climate forcing scenario, consistent with previous assessments. However, our data-driven approach also suggests a substantial probability of exceeding the 2 °C threshold even in the Low (SSP1-2.6) climate forcing scenario. While there are limitations to our approach, our results suggest a higher likelihood of reaching 2 °C in the Low scenario than indicated in some previous assessments—though the possibility that 2 °C could be avoided is not ruled out. Explainable AI methods reveal that the ANNs focus on particular geographic regions to predict the time until the global threshold is reached. Our framework provides a unique, data-driven approach for quantifying the signal of climate change in historical observations and for constraining the uncertainty in climate model projections. Given the substantial existing evidence of accelerating risks to natural and human systems at 1.5 °C and 2 °C, our results provide further evidence for high-impact climate change over the next three decades.
…Read more: https://www.pnas.org/doi/full/10.1073/pnas.2207183120
To their credit, they have published their code: “… Code is available on GitHub at https://github.com/eabarnes1010/target_temp_detection (60) and is archived on Zenodo at the following DOI: https://doi.org/10.5281/zenodo.7510551 (61).”
My understanding is the researchers are attempting to use the AI to identify climatologically significant geographic regions, or distributions of observations, to try to filter out the noise and reduce the uncertainty of predictions.
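To make the setup concrete, here is a minimal sketch of the kind of approach described: a regressor is trained on simulated (model-generated) temperature maps labelled with the years remaining until a warming threshold, then applied to an out-of-sample "observed" map. This is purely illustrative with synthetic data, not the authors' actual code (which uses neural networks on real climate model output); a ridge regression stands in for their ANN.

```python
import numpy as np

# Illustrative sketch only -- all data here is synthetic.
rng = np.random.default_rng(0)
n_maps, grid = 500, 12 * 24          # 500 training maps on a 12x24 grid

# Synthetic "climate model" output: each map's mean warming is tied to
# its label, the number of years remaining until the 1.5 C threshold.
years_left = rng.uniform(0, 30, n_maps)              # label: years to threshold
warming = 1.5 - 0.04 * years_left                    # mean warming of each map
X = warming[:, None] + rng.normal(0, 0.3, (n_maps, grid))  # noisy spatial maps

# Ridge regression as a simple stand-in for the paper's neural network
lam = 1.0
Xb = np.hstack([X, np.ones((n_maps, 1))])            # add a bias column
W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(grid + 1), Xb.T @ years_left)

# Apply to an "observed" map never seen in training: true mean warming
# of 1.1 C, which in this toy setup corresponds to ~10 years remaining.
obs = 1.1 + rng.normal(0, 0.3, grid)
pred = np.append(obs, 1.0) @ W
print(f"predicted years until 1.5 C: {pred:.1f}")
```

The key point the sketch illustrates is that the mapping from spatial pattern to time-until-threshold is learned entirely from model-generated training data, so the observations only ever enter at prediction time.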
My concern with this approach is that if the observational data were sufficient for tuning predictions, the AIs could be trained directly on those observations and could infer the climate relationships from the data themselves.
Using a simulation or model allows a large number of training runs to be packed into a short period of time, and constrains the output of the AI. But the AI is then tainted by the model; it effectively becomes an extension of the model.
I guess time will tell whether their approach has yielded increased predictive skill.