NOAA and NCAR partner on new, state-of-the-art U.S. modeling framework

From NOAA

Agreement paves way for U.S. to accelerate use of weather, climate model improvements

February 7, 2019

National Weather Service meteorologist Andrew Orrison uses weather model data to generate precipitation forecasts from NOAA's Weather Prediction Center in College Park, Maryland.

The United States is making exciting changes to how computer models will be developed in the future to support the nation’s weather and climate forecast system. NOAA and the National Center for Atmospheric Research (NCAR) have joined forces to help the nation’s weather and climate modeling scientists achieve mutual benefits through more strategic collaboration, shared resources and information.

The organizations recently signed a Memorandum of Agreement establishing a new partnership to design a common modeling infrastructure that will be transparent and easy to access and use by public and private researchers, including academia and industry. By leveraging efficiencies and synergies, reducing duplication of effort, and creating shared model repositories, future research advances will more quickly benefit the public through better weather and climate forecasts.

“Historically, different architectures for developing weather and climate models across the public and private sector created challenges for implementing the very best systems quickly,” said Neil Jacobs, Ph.D., NOAA assistant secretary of commerce for environmental observation and prediction. “This new framework streamlines the entire process and gives both researchers and forecasters the same tools across the weather enterprise to accelerate the development of forecast models.”

The agreement establishes the governance to allow NOAA and NCAR to prioritize and coordinate existing and ongoing investments. It does not replace existing governance structures or commit new funding for collaborative work.

The agreement marks a fundamental shift towards community modeling, a concept that will enable the entire weather enterprise to accelerate the transition of new approaches from research into operations. It also allows NOAA, together with its partners, to transition to the Unified Forecast System (UFS), a community-based, coupled, comprehensive weather and climate modeling system.

NCAR brings considerable expertise to the partnership, as its scientists have worked with the research community for many years to develop community weather and climate models.

“By combining NCAR’s community modeling expertise with NOAA’s excellence in real-time operational forecasting, this agreement will accelerate our ability to predict weather and climate in ways that are vital for protecting life and property,” said Antonio Busalacchi, president of the University Corporation for Atmospheric Research, which manages NCAR on behalf of the National Science Foundation. “This will enable the nation to produce world-class models that are second to none, generating substantial benefits for the American taxpayer.”

Additionally, NOAA is taking steps to establish a new Earth Prediction Innovation Center, made possible by the recent reauthorization of the Weather Research and Forecasting Innovation Act of 2017. The virtual Center will enable the research community to develop new and emerging model technology that can be quickly transitioned into forecast operations at NOAA’s National Weather Service. The operational global Earth system models will be made available to the research community to support scientific and research work.

“The Earth Prediction Innovation Center, UFS, and the joint NOAA/NCAR agreement are critical elements that will position the U.S. to regain its standing as a leader on the international earth-system modeling stage. The improved modeling capability will improve our life-saving watches and warnings,” Jacobs added.

With a new stage set for community modeling, NOAA is poised to upgrade the Global Forecast System in the months ahead with the addition of a new dynamical core, called FV3. The FV3-based GFS will be the first NOAA application derived from community modeling and will improve forecast accuracy.

Together, NOAA and NCAR are working with partners across the weather and climate modeling enterprise to deliver the best products and infrastructure that enable forecasters to save lives and protect property nationwide.

HT/Yooper

93 thoughts on “NOAA and NCAR partner on new, state-of-the-art U.S. modeling framework”

  1. They should stop focusing on delivering the “best” products and instead develop better products. The best now are really not very good.

    • Indeed, but it provides insight into the properties of individual knowers, has provided a means to model complicated scenarios involving groups of knowers and has improved our understanding of the dynamics of inquiry.
      What a load of codswallop.

    • Unless they’re starting from scratch, they’re wasting their time. The 100+ GCMs we already have are just variations on a basic theme, but the basic theme has multiple flaws. That’s why the projections of the models are in close agreement with each other as they continue to drift farther and farther from reality.

      • actually, they are not in close agreement.
        next, they continue to track reality within tolerable limits (±15%)

        name a flaw. cite the model and explain the better implementation you have.

        you won’t.

        my guess is you never looked at a single line of GCM code

    • Actually it reads to me as bureaucratic talk for combining two departments into one, trimming manpower and cost.

    • Yes, the new framework has been streamlined. You no longer need to provide any input data and you always get the same output: “It’s worse than we thought. Stop burning fossil fuels. We need more socialism!”

  2. ‘The United States is making exciting changes to how computer models will be developed in the future to support the nation’s weather and climate forecast system.’

    Is this a political move to give weather forecast credibility to the ‘climate forecast system,’ whatever that is?

    Let me give you the climate forecast for the next 50 years.

    The Sahel will continue greening.

    No other climate regions will change.

    That’s it. No ‘climate forecast system’ needed.

    • Right – this is an attempt to give credibility to climate modeling that it doesn’t deserve. You only need to wait hours before you know whether your weather models were right or not. There is a difference between weather and climate or so we’ve been told.

  3. Taxpayers’ money being used for the left’s propaganda generation: “This will enable the nation to produce world-class models that are second to none, generating substantial benefits for the American taxpayer.” Same as “If you like your health care plan, you can keep it.” And when they are done with this project, does that mean the code will be in the public domain? NIST did a STEP project; the code is at GitHub…

    • Modelers believe their models, even when they are not verified or shown to be way off base. Modelers make good money producing models and using techno gibberish to explain 1) why they are not predictive and 2) how they can fix it with more time and money. All models are wrong, some are useful.

  4. “Leveraging efficiencies and synergies,” tells you all you need to know about the bureaucracy behind this project.

  5. Waste more taxpayer money to make wrong predictions?
    That’s what goobermints are for.
    Wasting money.

    “Politics is the art
    of looking for trouble,
    finding it everywhere,
    diagnosing it incorrectly
    and then applying
    the wrong remedies.”

    Groucho Marx

  6. I enjoy reading about the high-quality US reference station network and how changes there compare to the overall reported US data. What is the latest on them?

    In the past, NOAA and NCAR seem to have paid little or no attention to that information–in spite of the fact that the network was established to meet the highest standards and to minimize influence of UHI on reported warming.

    • You and I both know why they don’t use that data: it does not give them the answer they and their masters are seeking.

  7. “…a community-based, coupled comprehensive weather and climate modeling system…” etc., etc,. etc.

    Shouldn’t NOAA be required to write in English using meaningful words?

    • There are some disturbingly ignorant remarks within this thread. Weather models are not climate models. Weather forecast models exist now that are more than adequate for providing highly-accurate 3-day forecasts. Mildly accurate 5-day forecasts. With useful trend indications out as far as 10 days.

      They utilize almost no ground station input. They’re overwhelmingly dominated by automated digital real-time space systems and multispectral sensors which are down-link net-connected and can thus pipe millions of observations into every computer run, several times a day. Then sources like digital ocean data, aircraft sensors and balloon sensors, etc. A tiny component of the data (almost nothing) comes from the ground stations.

      Accurate weather models are one of humanity’s more significant accomplishments. So what’s with all the caustic remarks about them?

      • “Accurate weather models are one of humanity’s more significant accomplishments.”

        What exactly do you mean by “accurate?” And for what time period: a day, a week, a month, a year, a decade?

        Here’s an example: the solar eclipse last year. You could have booked a flight to Oregon six months in advance and been absolutely certain there was going to be an eclipse that day at that time. On the other hand, suppose you wanted to watch it from the shoreline. Could any weather forecaster have guaranteed a clear morning a week in advance so you could watch the approaching shadow coming in over the ocean?

        • The Earth’s weather is a “chaotic dynamical system”, and these are known to display “exponential divergence”, i.e., given two close initial conditions, as you evolve the solutions in time from them, the distance between those solutions diverges exponentially.
          Forty years ago in his textbook on dynamical systems, Fields Medalist V. I. Arnold showed that for even a grossly-simplified weather system (only looking at 2-D winds on a perfect sphere), it would take 100-significant-digit initial data to be able to generate a 1-significant-digit 30-day forecast (and that is as a purely mathematical problem, without any allowance for round-off error and other things that happen when you look at it as a computer problem).
          FWIW
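          The exponential divergence described above is easy to demonstrate numerically. Below is a minimal sketch using the classic Lorenz-63 toy system (an illustrative stand-in for the atmosphere, not an actual forecast model): two trajectories started a billionth apart in one coordinate end up separated by many orders of magnitude more.

```python
# Sketch: exponential divergence of nearby trajectories in Lorenz-63.
# A toy chaotic system, not a real weather model.
import math

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def separation_after(steps, eps=1e-9):
    """Distance between two trajectories started eps apart in x."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + eps, 1.0, 1.0)
    for _ in range(steps):
        a = lorenz_step(*a)
        b = lorenz_step(*b)
    return math.dist(a, b)

early = separation_after(500)   # after 5 model time units
late = separation_after(2000)   # after 20 model time units
print(early, late)  # the gap grows by orders of magnitude
```

          Because the separation multiplies roughly exponentially in time, no realistic precision in the initial observations survives a long integration, which is why useful 3-day forecasts exist but useful 30-day ones do not.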

      • Have you not been paying attention? The caustic remarks aren’t about weather models. The caustic remarks are about computer game climate models, which are being used to project doom and gloom decades out and as a bludgeon to try to institute all manner of societal change.

        This announcement wasn’t just about weather models.

        • Exactly. Most of us here love weather models and weather scientists. Weather models are generally very accurate and weather scientists are real scientists.

          But we despise climate models and climate scientists. Climate models are just sciency alarmist porn, and most climate scientists are no better than witch doctors.

          • If a weather forecast model is wrong, it will be obvious pretty quickly. And if it’s a critical forecast, such as where and how strong a hurricane will be when it makes landfall, they can fine-tune the forecast as more recent data comes in.

            Climate forecast models? They’re already off target just a few decades out. That’s obvious.
            Yet some want us to spend trillions of dollars and surrender freedoms based on them.
            Genuine climate scientists can use them as a tool (one that needs correcting) to better understand.
            Climate “political scientists” use them as-is as a club to better subdue the opposition.

          • Yes, but…
            The WSM and WDM families of microphysics modules in WRF (and other models) simulate condensation/freezing/evaporation/sublimation for clouds, and gravitational settling for rain, snow, etc (but not updraft-effects; that is handled elsewhere); they exhibit a variety of misbehaviors:

            1) the settling scheme exhibits convergence (as a parcel of rain falls, it can become more concentrated, rather than diffusing). That’s on the “no-no” list for transport algorithms.

            2) the effect of (1) is so bad that the “top” of a falling rain-parcel can and does fall below the “bottom”, leading to negative rain concentrations, which the algorithms subsequently zero out. This violates the mass conservation that these schemes claim to have.

            3) the fall-velocity-interpolation algorithm in these modules can and does cause gravitational settling velocity to be sometimes negative (upwards). Physically speaking, rain does not fall upwards.

            Fix these (as I did for my last–private, not government–employer), and you get more-accurate forecasts. Fix some other things as well, and you end up with a 40% faster (as well as more accurate) computer code. So much for NCAR computational expertise…

            FWIW.
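            For contrast, here is a minimal Python sketch (a hypothetical stand-in, not the actual WSM/WDM Fortran) of a first-order upwind settling step that avoids misbehaviors (1)-(3): it conserves column mass exactly and can never produce negative concentrations, provided the CFL condition v*dt/dz <= 1 holds (sub-step otherwise).

```python
# Illustrative first-order upwind gravitational settling (a hypothetical
# stand-in, not actual WRF code). Mass-conserving and positive-definite
# as long as the CFL condition v*dt/dz <= 1 is respected.

def settle_upwind(q, v, dz, dt):
    """Advance rain mixing ratios q (index 0 = model top) one step.

    q  : list of per-layer rain concentrations (>= 0)
    v  : fall speed, positive downward (uniform here for simplicity)
    dz : layer thickness
    dt : time step, chosen (or sub-stepped) so that v*dt/dz <= 1
    """
    c = v * dt / dz
    assert 0.0 <= c <= 1.0, "CFL violated: sub-step the settling"
    flux_in = 0.0                  # nothing falls in through the model top
    out = []
    for qk in q:
        flux_out = c * qk          # fraction of this layer falling out below
        out.append(qk - flux_out + flux_in)
        flux_in = flux_out         # what left this layer enters the next one
    return out, flux_in            # flux_in is now the surface precipitation

q0 = [0.0, 1.0, 4.0, 2.0, 0.0]
q1, precip = settle_upwind(q0, v=5.0, dz=100.0, dt=10.0)   # c = 0.5
print(q1, precip)  # no negative layers; column mass + precip is conserved
```

            Real microphysics adds size-dependent fall speeds and layer-varying dz, but the positivity and conservation properties of the upwind form carry over.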

      • If what you say is true, then we should be data-warehousing the output of the weather models and using it to build climate models.

        Get rid of the not fit for purpose surface temperature data with its subjective and repeated adjustments to the past.

        This would very quickly show if the error term in climate models converges to zero or grows indefinitely.

        Because unless the error term can be shown to converge, the climate models are worthless. In fact they are worse than useless because they provide a false certainty.

      • accurate?

        one site is telling me it will be -6/2°C with snow showers, the other is saying 0/5°C and sunny. For Tuesday, or just 2 days from now. I guess I should prepare for rain…

    • To clarify, I would like to see a reliable 100-hour weather forecast. 100-day seems unrealistic. Climate “models” used to issue 100-year “projections” (wisely, they don’t call it “predictions” because of legal implications). How exactly it is done is a closely guarded secret; apparently a panel decides which model runs are politically correct enough to be included. As a result, these projections are only as good as the panel.

      • “Climate “models” used to issue 100-year “projections” (wisely, they don’t call it “predictions” because of legal implications).

        100-year “prognostications” would be more accurate.

  8. I guess that they’ll incorporate the European hurricane model in the common model repository? It seems to work better than the US one.

    “… exciting changes to how computer models will be developed in the future …” “… climate modeling scientists achieve mutual benefits through more strategic collaboration, shared resources and information.”

    “… a new partnership to design a common modeling infrastructure that will be transparent and easy to access and use by public and private researchers, including academia and industry.”

    Accessible to the public? Including skeptics?

    Note that in all this PR gibberish, there is no mention of developing new models – they are going to develop a new modeling infrastructure, within a new framework.

    And will they use Agile Development, and be CMMI Level 5?

  9. I wonder if there is room in this new initiative to include a truly innovative strategy called model validation. I know some folks in private industry who find it very useful to determine if their critical models actually resemble real-world processes.

  10. If I am understanding this correctly, the idea is to eliminate the embarrassment that climate models disagree with each other by standardising on a single climate model. Of course there will still be the bigger embarrassment that the real climate refuses to follow the climate model; no matter how hard they adjust the data.

  11. More money for NOAA so they can continue to bastardize the temperature records. This new partnership doesn’t fill me with enthusiasm.

    It’s the people behind the models that I worry about. If it were up to me, I would fire them, not give them more money.

  12. I agree that having a single model vs competing ones is the most efficient way to develop a modeling regime, but it is certainly not the most effective way. Competition usually surfaces the best; a monopoly stifles innovation.

    This initiative has the opportunity to develop ‘one source of truth’ that may be wrong, but will be hard to challenge…

  13. My cynic bone is vibrating. “Community modeling” sounds too much like ‘Communist’ modeling. They are trying to make us believe the resulting models will be transparent and available to all but they will still control all the variables….like forcing. Will the variables be up for public discussion, scrutiny, and change or not?

    • There is an open source, community model out there. I sent in a query and got a reply of the order: “this is a volunteer operation, and short-handed. The code is open source here:xxxxxxxx.org. If you want to take look and propose a change you can submit it there.”

    • The existing community model is open source
      you can make any change you like, just don’t expect others to accept it unless it is better.
      you won’t make any changes. it requires knowledge and skill

      also, forcings are not a variable. they are an input.
      you can change them.
      you won’t

  14. “This will enable the nation to produce world-class models that are second to none…”
    I would hope that this is driven by at least some bit of embarrassment since so far the Russian model is the only one that comes close to reality. While the current accuracy of the Russian model might only be due to guesswork and luck it is still showing up the other modeling teams, especially here in the U.S., as being too hidebound by their irrationality and confirmation bias.

    • You’ve done it now, you are being reported to the Mueller team for supporting Russian interference in American climate. Only true American models may be used.

    • “This will enable the nation to produce world-class models that are second to none…”

      Good, because our current world-class models aren’t second. They’re way worse than that…

  15. Will any of the new toys directly simulate thunderstorms? Will the code generate numerical hailstones, and will the associated updrafts be reported? I didn’t think so. Granted, perhaps there will be some useful advances in weather forecasting. But because the real atmosphere exhibits its most powerful heat-engine performance at small scale and high velocities, the new toys are not likely to simulate what really happens to heat with any more authority than the old ones.

  16. The biggest concern would be if you don’t play along with the consensus model you get pushed out. So will it end up as a pal review system where new improvements and innovations are ignored?

  17. I’m the first one to hope for more reliable and accurate finely-localized weather forecasts.

    However, in the current political green-robbery context, “The Earth Prediction Innovation Center….” sounds like a hi-tech version of those traditional little Tyrolean wooden home weather toys.

    Expect sun when the lady is out, wind and rain when the gentleman appears.

  18. A cynical reading of the situation would say somebody is concerned he’s not getting the money he wants for his model because it’s going to rivals. This looks like a consolidation effort to squeeze them out. It’s wrapped in language about collaboration to make it sound like an efficiency move. Openness and transparency are anathema to people protecting their turf. No doubt, some of the principals involved want to do better, but there’s always an underbelly in politics.

  19. This statement has just gotta make you feel “all warm n’ comfy” all over more than anyplace else by knowing that Big Government is going to protect you from harm and save you money to boot.

    “By combining NCAR’s community modeling expertise with NOAA’s excellence in real-time operational forecasting, this agreement will accelerate our ability to predict weather and climate in ways that are vital for protecting life and property,” said Antonio Busalacchi, president of the University Corporation for Atmospheric Research, which manages NCAR on behalf of the National Science Foundation. “This will enable the nation to produce world-class models that are second to none, generating substantial benefits for the American taxpayer.”

    But I’m still trying to figure out that “protecting property” thingy.

  20. Have you ever listened to a weather forecast? They don’t tell you what is going to happen. Rather they tell you there is X% chance of something happening.

    What they are really telling you is that when conditions in the past matched today, here is what happened in the past. This is not physical modelling, it is pattern recognition. You don’t even need to understand the physics of weather; all you need are accurate records and black-box machine learning. Understanding the physics simply lets you separate the inputs from the outputs in your black box.

    Where climate and weather forecasting have gone wrong is to continue trying to solve a chaotic system from first principles even though it was shown mathematically that this is well beyond the capability of any known approach. And the finest minds in physics and mathematics have tried to solve the underlying problem for centuries.

    But of course after spending billions on ever faster computers, it is hard to admit that you have blindly followed a mathematical dead end.
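    The analog idea sketched above (“when conditions in the past matched today, here is what happened next”) amounts to a nearest-neighbour lookup; a toy version, with an invented temperature history, might look like this:

```python
# Toy "analog" forecast: predict tomorrow from what followed the most
# similar past day. The history values are invented for illustration.

history = [12.0, 14.0, 13.5, 11.0, 10.5, 12.5, 15.0, 14.5, 13.0, 12.0]

def analog_forecast(today, history):
    """Return the value that followed the closest past match to `today`."""
    best = min(range(len(history) - 1),
               key=lambda i: abs(history[i] - today))
    return history[best + 1]

print(analog_forecast(12.2, history))  # closest match is 12.0, followed by 14.0
```

    A real analog scheme would match whole pressure/wind/moisture fields rather than a single number, but the principle is the same.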

    • Ferd, I don’t think short-term weather forecasts have gone wrong. They tell me what most likely will happen. Which direction the wind is going to be coming from, the strength of the winds, the moisture content of the air, whether there is a front coming and when. How high or low the temps are likely to be, anticipated rainfall amounts, etc. We all understand things aren’t perfect and very local patterns can change. But for planning the next day or two it is pretty accurate.

      • Tom in Florida – February 9, 2019 at 3:10 pm

        Ferd, I don’t think short term weather forecasts have gone wrong. They tell me what most likely will happen.

        You are correct, Tom, …… and I believe that is what Ferdberple meant when he stated, to wit:

        This is not physical modelling, it is pattern recognition.

        And a prime example of said is …. a weather map of the western Atlantic showing several potential “tracks” that an incoming hurricane will likely follow.

    • Ferd, you should find out how WX modelling is really being done; you’re well wide of the mark. Plus, these are ‘free-floating’ physical simulations using present data-sample inputs. They’re not curve-fitting to the statistical past. The fact that such weather simulations can be so accurate, for days in advance, with low-ish resolutions of the data inputs (3 to 9 km on a side) should be a wake-up call. The resolution is getting steadily better as computer power and bandwidth grow, and so is the underlying simulation’s ‘skill’ at accurately projecting data forwards by several days.

      I would readily agree that ‘climate simulations’, however, are of no practical use, any more than satellites can measure actual climate change: the scale is completely inappropriate for that, and present ‘climate-data’ observation inputs are essentially completely absent for testing and developing them. Climate changes are much too slow to test in the ways humans want to test them. Without a time-machine that goes forwards and then returns with a future log of all planetary climate changes, for say 100 million years in advance, climate simulations cannot be developed and refined with any level of predictive confidence.

      In short they’re artistic, not testable.

      But in weather simulations the scale is very much exactly what you want, and the observational input data is epic in its variety, is near real-time, and is very high quality – all the things that climate model don’t have (and won’t be getting, hence, no predictive value).

      But Milankovitch cycles (and past stats) do not drive daily weather sim forecasts, and are not even a part of WX simulations, whereas Milankovitch cycles and past data are key inputs to climate simulations.

      i.e. weather simulations and ‘climate simulations’ (I hesitate to call them that) are completely different beasts, even if they seem to use superficially similar approaches to deriving predictions.

      The difference all comes down to time-scale: an actual climate change will never happen quickly enough to model it in a predictively useful and testable way.

      In other words, climate modeling isn’t science.

      • WXcycles – exactly.

        With a new stage set for community modeling, NOAA is poised to upgrade the Global Forecast System in the months ahead with the addition of a new dynamical core, called FV3. The FV3-based GFS will be the first NOAA application derived from community modeling and will improve forecast accuracy.

        This is the Official Evaluation page for the FV3GFS
        https://www.emc.ncep.noaa.gov/users/Alicia.Bentley/fv3gfs/

        Here is a link to a pdf outline of the upcoming new FV3-based GFS forecast model
        http://www.emc.ncep.noaa.gov/users/Alicia.Bentley/fv3gfs/updates/EMC_CCB_FV3GFS_9-24-18.pdf

        Now, for all you armchair weather watchers who think you know everything about meteorology & demand perfection from an imperfect science with imperfect data sampling: the source code for all these models (GFS, NAM, HRRR, Hurricane, etc.) is in the public domain for you to download, view & modify. So, get the code, correct it & make it perfect, and show your results about how much smarter about this subject you are. Till that time, you should really quit making fools of yourselves.

        • There is a problem with your suggestion. How can you improve a climate model if you can not test it?

          • How can you improve a climate model if you can not test it?

            Of course you can test it, just like they test the other dynamic models. The atmosphere has been getting sampled in (reasonable) detail since the 1960’s (that’s almost 1 complete PDO/AMO oscillation period) and improving every decade but it is still part of that “imperfect data sampling” thing. You take a current model and the new model under test, initialize them with the same observation set of some previous date and run the models in parallel & see how they do with the result already known from newer observations. If the newer model modifications result in a match closer to the known observations, you can verify the modifications were helpful.

            Basic stuff…
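            That parallel hindcast scoring can be sketched in a few lines; all numbers below are invented for illustration. Both “models” start from the same past initial state, and each forecast is scored against the observations that actually followed:

```python
# Toy hindcast verification: score a current and a candidate model run,
# initialized identically, against the observations recorded afterwards.
# All values are invented for illustration.
import math

obs       = [15.0, 16.2, 17.1, 16.8, 15.5]   # verifying observations
current   = [15.0, 16.9, 18.3, 18.5, 17.6]   # current model's hindcast
candidate = [15.0, 16.4, 17.5, 17.1, 16.1]   # modified model's hindcast

def rmse(forecast, truth):
    """Root-mean-square error of a forecast against observations."""
    return math.sqrt(sum((f - t) ** 2 for f, t in zip(forecast, truth))
                     / len(truth))

# Accept the modification only if the candidate verifies better.
print(rmse(candidate, obs) < rmse(current, obs))
```

            Operational verification uses many cases and fancier skill scores (anomaly correlation and the like), but the accept-it-if-it-verifies-better logic is the same.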

  21. The current climate model approach, using black-box machine learning to tune climate model parameters (backcasting), is simply curve fitting. Any correlation found in the validation data is simply accidental. The model can be expected to diverge from future data with performance no better than chance.

    Add to this will be bias caused either by data errors or errors in the underlying physics and the models can be expected to drift in addition to simple chance.
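    A small illustration of the curve-fitting point: a polynomial tuned to reproduce past noisy data exactly (here via Lagrange interpolation, a deliberately extreme stand-in for parameter tuning) stays near the underlying signal inside the fitted range, but diverges wildly the moment it is extrapolated.

```python
# Sketch: a model fitted exactly to past data (flat signal plus small
# noise) behaves inside the fitted range but blows up when extrapolated.

def lagrange(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# "Training" data: a constant signal of 1.0 plus small noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
ys = [1.0, 1.1, 0.9, 1.05, 0.95, 1.1, 0.9, 1.0]

in_sample  = lagrange(xs, ys, 3.5)    # inside the fitted range: near 1
out_sample = lagrange(xs, ys, 10.0)   # "forecast": far from the signal
print(in_sample, out_sample)
```

    The fit looks fine over the data it was tuned on; the divergence only shows up outside that range, which for a climate model means decades after the tuning period.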

  22. I recall Joseph Stalin’s comment that it is “Who counts the votes” that matters.

    Just how many of the persons working on this new combination were appointed by Obama’s people during his two terms? The message will still be the same, or you will not get promoted.

    We need a tough boss to ask questions such as “Prove it,” or that other useful word: “Why?”

    MJE

  23. h/t Harald Lesch:

    “Wenn Computer alles können, dann können sie mich kreuzweise” –> if computers can do everything, then they can kiss my backside.

Comments are closed.