Sent to me by NWS Public Affairs
December 17, 2025
NOAA has launched a groundbreaking new suite of operational, artificial intelligence (AI)-driven global weather prediction models, marking a significant advancement in forecast speed, efficiency, and accuracy. The models will provide forecasters with faster delivery of more accurate guidance, while using a fraction of computational resources.
“NOAA’s strategic application of AI is a significant leap forward in American weather model innovation,” said Neil Jacobs, Ph.D., NOAA administrator. “These AI models reflect a new paradigm for NOAA in providing improved accuracy for large-scale weather and tropical tracks, and faster delivery of forecast products to meteorologists and the public at a lower cost through drastically reduced computational expenses.”
The new suite of AI weather models includes three distinct applications:
- AIGFS (Artificial Intelligence Global Forecast System): A weather forecast model that implements AI to deliver improved weather forecasts more quickly and efficiently (using up to 99.7% less computing resources) than its traditional counterpart.
- AIGEFS (Artificial Intelligence Global Ensemble Forecast System): An AI-based ensemble system that provides a range of probable forecast outcomes to meteorologists and decision-makers. Early results show improved performance over the traditional GEFS, extending forecast skill by an additional 18 to 24 hours.
- HGEFS (Hybrid-GEFS): A pioneering, hybrid “grand ensemble” that combines the new AI-based AIGEFS (above) with NOAA’s flagship ensemble model, the Global Ensemble Forecast System. Initial testing shows that this model, a first-of-its-kind approach for an operational weather center, consistently outperforms both the AI-only and physics-only ensemble systems.

More about the new AI operational models
AIGFS — a new AI-based system that uses a variety of data sources to generate weather forecasts comparable to those produced by traditional weather prediction systems, such as GFS.
- Performance: shows improved forecast skill over the traditional GFS for many large-scale features. Notably, it demonstrates a significant reduction in tropical cyclone track errors at longer lead times.
- Efficiency: AIGFS’s most transformative feature. A single 16-day forecast uses only 0.3% of the computing resources of the operational GFS and finishes in approximately 40 minutes. This reduced latency means forecasters get critical data more quickly than they do from the traditional GFS.
- Area for future improvement: Though track forecasts are better, v1.0 shows a degradation in tropical cyclone intensity forecasts, which future versions will address.
AIGEFS — an AI-based 31-member ensemble, similar to the GEFS, that provides a range of possibilities for weather forecasters and decision-makers rather than a single forecast model solution.
- Performance: forecast skill is comparable to the operational GEFS.
- Efficiency: requires only 9% of the computing resources of the operational GEFS.
- Area for future improvement: developers continue to improve the ensemble’s ability to create a range of forecast outcomes.
HGEFS — the most innovative application in the new suite. The HGEFS is a 62-member “grand ensemble” created by combining the 31 members of the physical GEFS with the 31 members of the AI-based AIGEFS.
- Performance: by combining two different modeling systems (one physics-based, one AI-based), the HGEFS creates a larger, more robust ensemble that more effectively represents forecast uncertainty. As a result, the HGEFS consistently outperforms both the GEFS and the AIGEFS across most major verification metrics (a brief illustrative sketch of the grand-ensemble idea follows this list).
- A NOAA first: to our knowledge, NOAA is the first organization in the world to implement such a hybrid physical-AI ensemble system.
- Area for future improvement: NOAA continues its work to improve HGEFS’s hurricane intensity forecasts.
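To make the “grand ensemble” idea concrete, here is a minimal illustrative sketch of pooling two 31-member ensembles and deriving probabilistic guidance from the combined 62 members. This is not NOAA’s code; the synthetic temperature values, the variable names, and the 30 °C threshold are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of a hybrid "grand ensemble": pool 31 physics-based members
# with 31 AI-based members, then derive probabilistic guidance from all 62.
# The forecast values below are synthetic stand-ins, not real model output.
rng = np.random.default_rng(0)
gefs_members = rng.normal(loc=25.0, scale=3.0, size=31)    # physics-based 2 m temperature (deg C)
aigefs_members = rng.normal(loc=26.0, scale=2.5, size=31)  # AI-based 2 m temperature (deg C)

grand_ensemble = np.concatenate([gefs_members, aigefs_members])  # 62 members

ens_mean = grand_ensemble.mean()                  # best-guess value
ens_spread = grand_ensemble.std(ddof=1)           # a measure of forecast uncertainty
prob_above_30c = (grand_ensemble > 30.0).mean()   # probability of exceeding a threshold

print(f"members={grand_ensemble.size}  mean={ens_mean:.1f} C  "
      f"spread={ens_spread:.1f} C  P(T>30C)={prob_above_30c:.0%}")
```

The pooling itself is trivial; the claimed benefit is that two independently constructed ensembles sample different sources of error, so the combined distribution represents uncertainty better than either alone.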
A NOAA and industry-wide effort
This initial model suite is an outgrowth of Project EAGLE, a joint initiative between NOAA’s National Weather Service, Oceanic and Atmospheric Research labs, the Environmental Modeling Center in NOAA’s National Centers for Environmental Prediction, and the Earth Prediction Innovation Center.
“Using Project EAGLE and the Earth Prediction Innovation Center, NOAA scientists continue to work with members of academia and private industry on more advancements in forecasting technology,” added Jacobs.
The team leveraged Google DeepMind’s GraphCast model as an initial foundation and fine-tuned the model using NOAA’s own Global Data Assimilation System analyses. This additional training with NOAA data improved the Google model’s performance, particularly when using GFS-based initial conditions.
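As a very rough illustration of what “fine-tuning a pretrained model on in-house analyses” can look like, here is a heavily simplified sketch. The model class, tensors, and hyperparameters are hypothetical placeholders, not the actual GraphCast/GDAS pipeline (GraphCast itself is a much larger graph neural network published in JAX).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Heavily simplified sketch of fine-tuning a pretrained forecast emulator on
# analysis data. "PretrainedEmulator" and the synthetic tensors are placeholders.
class PretrainedEmulator(nn.Module):
    def __init__(self, n_vars: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_vars, 64), nn.ReLU(), nn.Linear(64, n_vars))

    def forward(self, state):
        # Map the atmospheric state at time t to a predicted state at t+6 h.
        return self.net(state)

def fine_tune(model, analyses_t, analyses_t6, epochs=3, lr=1e-4):
    """Fit the model so forecasts launched from analyses at time t
    match the verifying analyses six hours later."""
    loader = DataLoader(TensorDataset(analyses_t, analyses_t6), batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # small LR: nudge the pretrained weights, don't overwrite them
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

# Synthetic stand-ins for (analysis at t, analysis at t+6 h) training pairs:
x = torch.randn(1024, 8)
y = x + 0.1 * torch.randn(1024, 8)
model = fine_tune(PretrainedEmulator(), x, y)
```

The point of additional training on GDAS-consistent data is simply that a model tuned on one center’s analyses tends to perform better when initialized from those same analyses.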

Climate, weather, and water affect all life on our ocean planet. NOAA’s mission is to understand and predict our changing environment, from the deep sea to outer space, and to manage and conserve America’s coastal and marine resources. See how NOAA science, services, and stewardship benefit your community: Visit noaa.gov for our latest news and features, and join us on social media.
AI has lots of promise. But the only way to find out which promises hold water is to try. I am cautiously optimistic.
Pessimistic is a better bet.
Are you old enough to remember when banks “went onto computer”? Do you remember how every mistake and fraudulent entry was blamed on the computer? Can you honestly say customer service at banking counters has shown any improvement over, say, 1978?
AI does not exist (at least not in a form/place you and I can access), and while LLMs are very handy tools in the hands of the literate, they are no better than an IBM Mark 2 word processor in the hands of the average (functionally illiterate) citizen.
Such as NOAA, apparently. You know none of them understands Jack Shmitt about the limits of their supposed AI, because they use the acronym so often (sorry, I could not even get to the end, so irritating, like a little dog’s incessant yapping AI AI AI AIYAIYAAAI!!!) as if it were some kind of talisman to ward off the evil spirits of academic rigour.
All so droll…
Old enough to remember when Social Security went to computers. Ouch…..
Sometimes Christmas comes early, Anthony…
Chaotic systems are immune to systemic analysis.
IPCC TAR, Chapter 14, page 771, pdf3:
“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
Mr. Case: Update from IPCC (sarc)- “With advanced AI, we can now predict long-term warming climate with, uhmmm, say, 95% certainty. Yeah, that’s the ticket….”
Yes. The most extreme events are often anthropomorphised, for example by saying the hurricane took an unexpected change of direction. Sometimes they are even called ‘nasty’, as if they were human.
What they really mean is that our computer models’ output was plain wrong. Hurricanes and extreme events are unpredictable by nature.
What I have found from communication with forecasters is that they might produce 2-4 runs of the model (which of course give different results), and then the forecasters hold a beauty contest to pick one.
When hurricanes were given exclusively female names, sudden behavioral changes were more understandable and forgivable.
Exactly. The “finest” hurricane track forecasts, lovingly crafted by the “best and brightest” seem to require “updating” every hour or so, if not more frequently.
A smart 12 year old with access to the same satpix and observations could do as well.
“Chaotic systems are immune to systemic analysis.”
Close but with a bit of a tweak….
Chaotic systems are immune to systemic analysis that yields accurate results.
Is AI really power hungry?
Power hunger is a distinctive characteristic of those who can afford the many thousands of processors needed to process the huge amounts of data needed to simulate apparent intelligence for those humanoids who have no hope of passing the Turing test themselves. This is important to remember the day when “AI takes over” and starts hunting us with armed robot dogs: it is just a bunch of computers that belongs to somebody. That somebody is the legal entity liable for the ensuing damage, but replacing one or two motherboards will be the offered solution, of course.
But I suspect your question was about electricity, not the Protocols, so I can tell you a quick looksee says the Nvidia H100 uses 700 Watts, times, say, 32 in a server box, 8 boxes per 2x2x6 foot cabinet, arranged in row upon row in a data center… Well, how big do you think these server farms are?
That, my friend, brings us to 700W times many thousand, plus supporting equipment, notably cooling, which uses a really uncool amount of power.
Gigawatts per acre, does that sound hungry?
Nyomnyomnyommm…
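Taking the commenter’s figures above at face value, here is a quick back-of-envelope sketch of the arithmetic; the cabinet count and overhead factor are purely illustrative assumptions, not measured figures.

```python
# Back-of-envelope data-center power estimate using the numbers in the comment above.
gpu_watts = 700            # quoted draw for an Nvidia H100
gpus_per_box = 32          # the commenter's figure (8 per server is more typical)
boxes_per_cabinet = 8
cabinets = 1000            # hypothetical "row upon row" data center
pue = 1.5                  # assumed power usage effectiveness: cooling and other overhead

it_load_mw = gpu_watts * gpus_per_box * boxes_per_cabinet * cabinets / 1e6
total_mw = it_load_mw * pue
print(f"IT load ~{it_load_mw:.0f} MW, ~{total_mw:.0f} MW including cooling/overhead")
```

Whatever the exact counts, the conclusion that training-scale AI facilities draw power measured in the tens to hundreds of megawatts is hard to avoid.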
AI is not power hungry.
AI is software and absent a computer uses no power.
The power hungry are the data center servers and high-speed processors.
You can run those and have nice foot warmers, but without software, they can only warm your toes.
This is yet another case of oversimplification to spoon feed to the masses who have been programmed to not think for themselves.
How much “climate activist” bias in forecasts do you think this will bypass ??
Weather is probably too chaotic to do much past a week or so with any accuracy.
Yes, the math of extreme sensitivity to initial conditions indicates intrinsic unpredictability which AI cannot overcome. It is in the math. Some people say there is a way around it but I have yet to see it. The time scale of the unpredictability depends on the situation. It can be ten days when things are stable on a large scale, or just an hour in the case of a storm track. In the last month the NWS evening forecast has missed my low temp by 6 degrees twice.
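The sensitivity argument is easy to demonstrate with a toy chaotic system. Below is a minimal sketch using the classic Lorenz-63 equations with a crude Euler step; it is a standard textbook illustration, not anyone’s operational model, and the numbers are illustrative only.

```python
import numpy as np

# Two Lorenz-63 trajectories starting 1e-8 apart: the separation grows
# roughly exponentially, which is the essence of "sensitive dependence
# on initial conditions". Classic parameter values; simple Euler integration.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # almost identical initial condition
for step in range(1, 2001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t={step * 0.01:5.1f}  separation={np.linalg.norm(a - b):.3e}")
```

No amount of model skill, AI or otherwise, removes that growth of initial-condition error; better models and better ensembles only manage it.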
“Weather is probably too chaotic to do much past a week or so with any accuracy.” Actually, I would be happy with knowing 1-2 days out. Instead, after reviewing the local forecast, I tell my wife we may have rain or snow, it may or may not be windy, but in all cases wear a short-sleeved blouse and bring a coat. Actually, the locals warned me of that when we first moved here. Snow in June is as likely as 90 degree weather; it is only obvious which way it is tending when one looks outside in the morning. (We live north of Reno in the Pah Rah mountains.) I do love radar maps, but the wind changes are clearly not predictable, and no one knows where rain will actually land.
But I still love it!
I think it would be funny if NOAA said they were going to build “their” AI, but really made a call to Google Barf or ChatGPT.
A 50% chance of rain does not mean the probability of rain in that forecast is a coin toss.
It means 50% of the coverage area could be subject to rain, without declaring which section of the area that applies to.
It allows weather forecasters to be perceived as more accurate than they are.
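For what it’s worth, the NWS definition is PoP = C × A: the forecaster’s confidence that measurable precipitation will occur somewhere in the area, times the fraction of the area expected to receive it. A tiny sketch (the scenarios are made up for illustration) of how two very different situations can both print as “50% chance of rain”:

```python
# Probability of precipitation as the NWS defines it: PoP = confidence x areal coverage.
def pop(confidence: float, areal_coverage: float) -> float:
    return confidence * areal_coverage

print(pop(confidence=1.0, areal_coverage=0.5))  # certain rain over half the area  -> 0.5
print(pop(confidence=0.5, areal_coverage=1.0))  # 50/50 chance of area-wide rain   -> 0.5
```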
Open Snow has a proprietary AI forecast tool that improves micro-climate forecasts significantly. I’m guessing the NOAA system is better at grid cell resolution also…?
Oh, Great. Now we will be bombarded with climate hallucinations.
Ummmm . . . is that any different from what we now have, say, in terms of three or more days-ahead weather forecasts?
In certain areas of the NE US the only remotely accurate forecasts are measured in hours, not days.
All current and future forecasts are already unreliable to a certain degree, depending on how far ahead they are prepared.
When a filter is applied to take out the forecasts that are essentially ‘same as yesterday’, the forecasts dip below 50% in reliability.
“Same as yesterday” gives around 85% success in general terms.
In my tropical locality, for maximum temperature, around 95% accuracy within 1 SD. Who needs a supercomputer?
This means (seriously) that tossing a fair coin is more accurate. How much does that cost?
That depends on which coin you use….
USA stopped producing pennies, so there is that cost factor.
Well I hold that “A.I.” really stands for –
“Aggregated Imputations”
“will be”
LOL
Climate hallucinations have been ongoing for decades.
In my limited experience, AI is of two types. First, with often excellent results, is AI used for faster data searches, helping write software, etc., in general, activities that do not need opinions. Second, AI outputs that are at risk of being wrong because they do use opinions or beliefs, as detected by different answers to the same questions rephrased or asked for different emphasis.
I would presume that NOAA knows of this difference in AI and does not sin by using the second form. However, past experience with modern-day science has too many examples of failure because of the creep of beliefs into the system. Geoff S
AI can be of possible use in some circumstances…
… but it should never, ever be taken as 100% or the final word.
Not without, like everything else, independent verification.
I am with some of the others here, cautiously optimistic.
From the above article:
“NOAA has launched a groundbreaking new suite of operational, artificial intelligence (AI)-driven global weather prediction models, marking a significant advancement in forecast speed, efficiency, and accuracy. The models will provide forecasters with faster delivery of more accurate guidance, while using a fraction of computational resources.”
The hell you say!
Will those new AI-driven prediction models require new supercomputers tailored to AI instead of just arithmetic computations at tens of petaflops (reference the two tens-of-$millions-per-year-each WCOSS supercomputers purchased several years ago by NOAA: https://www.noaa.gov/news-release/noaa-completes-upgrade-to-weather-and-climate-supercomputer-system and https://www.gdit.com/about-gdit/press-releases/noaa-awards-general-dynamics-high-performance-computing-contract )?
Also, where is NOAA going to get the 100-200 MWh per year electricity needed to power their new AI data centers? The power output from restarting TMI Unit 1 nuclear reactor is already spoken for . . . sarc!
As for “a significant advancement in forecast speed, efficiency, and accuracy”, AFAIK AI bots have NOT demonstrated immunity to GIGO.
The high energy consumption of AI is in generating the models; making use of them is much less energy intensive. Note that the AIGFS uses less than 1% of the energy per forecast used by the GFS.
The way I look at it, is that the AI forecasts use pattern recognition as opposed to simulating in excruciating detail. This would be a more detailed version of the analogs that Joe Bastardi uses for long range forecasts.
That’s GREAT to hear . . . based on you answering these two questions:
(a) Can an AI model be “generated” in a day, or does it take more like a year or two to do such?
(b) How long before a given AI “generated model” becomes seriously outdated or needs to be completely replaced to stay competitive with other AI “generated models”?
48 hours later . . . hmmmm . . . it appears that those two questions were too tough for even an AI bot to answer on behalf of Erik. Go figure.
Don’t tell me it’s better, show me it’s better. Jumping on the AI bandwagon doesn’t mean it’s a step ahead. We’re entering an AI period where the proof will be in the pudding. Right now it’s “me too” with AI.
They are for weather forecast, not “climate” forecast. A long-range weather forecast goes out to, what, about 10 days?
It will quickly (relatively) become apparent if they are better or not.
AccuWeather actually goes out to 30 days, not that it is very accurate.
From what I have seen, AI is not ready for prime time. This is the q&a exercise I went through with Google’s Gemini. Fortunately I knew enough (barely) to know where it was giving a misleading answer. Note that I did not ask for a comparison wrt nameplate capacity, just a straight-up comparison. Judge for yourself:
I asked for a comparison of a specific solar plant to an average combined cycle natural gas plant. The reply included this:
“Scale: At 754 MW, Villanueva has a nameplate capacity comparable to or larger than many utility-scale combined-cycle natural gas plants, which typically range from 500 MW to 1,000 MW.”
I responded: “At 754 MW, Villanueva has a nameplate capacity comparable to or larger than many utility-scale combined-cycle natural gas plants, which typically range from 500 MW to 1,000 MW.” Is that not a false comparison since the capacity factor is significantly higher for ccng? Humans will be very badly misled.
Got this response:
“You are correct. Simply comparing the nameplate capacity of a solar plant to a natural gas plant without mentioning the capacity factor is highly misleading, and can be seen as a false equivalency. A nameplate comparison alone ignores the fundamental differences in how these technologies operate and provide power to the grid.
Comparison based on annual energy generation. Assuming a 754 MW capacity for both, using typical capacity factors:
- Solar plant (Villanueva): capacity factor ~25%; annual energy produced: 754 MW * 8,760 hours/year * 0.25 = ~1.65 TWh
- CCGT plant: capacity factor ~60% (a conservative mid-range estimate); annual energy produced: 754 MW * 8,760 hours/year * 0.60 = ~3.96 TWh
As this calculation shows, the natural gas plant generates more than double the amount of electricity per year, even with the same nameplate capacity. This highlights why a simple nameplate comparison is misleading.”
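For anyone who wants to check the arithmetic in that exchange, here is the same calculation in a few lines of Python; the 25% and 60% capacity factors are the assumptions quoted in the chatbot’s reply, not independently verified figures.

```python
# Annual generation from nameplate capacity and capacity factor.
HOURS_PER_YEAR = 8760

def annual_twh(nameplate_mw: float, capacity_factor: float) -> float:
    return nameplate_mw * HOURS_PER_YEAR * capacity_factor / 1e6  # MWh -> TWh

solar = annual_twh(754, 0.25)   # ~1.65 TWh
ccgt = annual_twh(754, 0.60)    # ~3.96 TWh
print(f"Solar: {solar:.2f} TWh/yr, CCGT: {ccgt:.2f} TWh/yr, ratio: {ccgt / solar:.1f}x")
```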
Unfortunately, if the 31 members produce 31 different possibilities, then at least 30 are wrong. Not to worry, NOAA can produce even more incorrect forecasts –
The final outcome seems to be even more wrong answers, delivered much faster and at greater expense. Maybe NOAA could learn from Elon Musk’s “self driving car” promises.
Well, on the plus side, I guess it can’t possibly be any worse than the human weather guessers we currently endure.
It must be nice to work in an industry where being wrong half the time is not just accepted, it’s expected. I actually enjoyed the course on meteorology I took in college. Never even occurred to me that I should pursue a career in the field.
Of course, if AI takes over, might not be such a cushy gig in the future.
I want to compare the accuracy of these AI models with the long term forecasts of Joe Bastardi. My bet is on Joe.
Great, so they are averaging the incorrect output of 3 or more models!!!!
They will of course run the old system and the new for a period of time just to confirm the strengths and weaknesses of both. Won’t they?
And of course the new system will be able to explain its reasoning when predictions and reality inevitably don’t match. Won’t it?
Don’t blow up the old system just yet and keep a few people around who know how it works. Just in case.
An AI computer model can be as performant as you like. However, it will only have predictive power if it gets the physics (and all the relevant physics) right. That means taking many other effects (the sun, clouds, emerging phenomena etc.) into account besides CO2, which plays only a very minor role if any at all. Politics have failed; it’s high time real unbiased science takes over.
I have never taken much interest in how weather forecasting models work. I have my own enthusiasms. But I don’t recall ever hearing that CO₂ is used in forecasting.
So, they are pissing away more of my money on computers that can’t forecast weather in a 72 hour time-frame with any reliability. Sweet bleedy Jebuss.
Any sufficiently advanced technology is indistinguishable from magic.
You do not need super computers to run AI software.
You will get results faster, though.
Now consider why the same companies bringing online advanced computers and data storage centers want you to link all your devices and put all your data in the “cloud.”
But, you say, the cloud is secure.
In point of fact the security protocols are to prevent external penetration.
No one ever talks about how your data is an open book to the owners of the cloud.
Internal security is lacking.
I refuse to link my devices.
I refuse to put my personal or professional data in the cloud.
I logged on to Chrome yesterday and found, without my say and without notification, my work computer had been linked to my home computer. Without my permission.
I fixed it.
A few weeks ago, Edge offered to import all of my other browser’s bookmarks, passwords, history, etc. I said no. I do not want Microsoft watching me.
Big Brother is here.
Call this a conspiracy theory if you wish but do think about it.
“Area for future improvement: NOAA continues its work to improve HGEFS’s hurricane intensity forecasts.”
Presumably they think HGEFS is under-predicting intensity. I find this amusing, since there is evidence that intensities are being over-predicted of late, based upon actual damage. Maybe HGEFS is more accurate than they desire?