Biases and inconsistencies in weather forecast systems found

From the INSTITUTE OF ATMOSPHERIC PHYSICS, CHINESE ACADEMY OF SCIENCES and the “but there’s no bias in climate forecasting, it’s perfect” department.

Scientists find inconsistencies and biases in weather forecasting system

The tiniest of natural phenomena can have a big impact on our weather, but an international team of researchers has found that one of the most widely used systems for modelling meteorological conditions does not account well for environmental microphysics at all scales.

The researchers published their evaluation in the latest issue of Advances in Atmospheric Sciences.

Many forecasting agencies employ the Unified Model (UM), a climate and weather simulation modelling system. The Met Office in the United Kingdom oversees the continual evolution and development of the UM as scientists conduct more analyses.

“This model is designed to run across spatial and time scales and is known to produce skillful predictions for large-scale weather systems,” wrote Marcus Johnson, a graduate student in the School of Meteorology at the University of Oklahoma. Johnson is first author on the paper. “However, the model has only recently begun running operationally at horizontal grid spacings of ~1.5 kilometers.”

This finer spacing permits explicit treatment of cloud formation, hydrometeor growth, and precipitation: a field known as cloud microphysics.

“Microphysics are important to numerical weather models, but provide a great source of error in weather prediction,” Johnson said. “It is crucial that we identify UM microphysics shortcomings not only to alert forecast offices when interpreting model results, but also to allow the microphysics authors to improve the scheme for more accurate results.”

The UM’s microphysics scheme was originally designed and tuned for large-scale precipitation systems, according to Johnson. When his team analyzed two specific rainfall systems, they found that the scheme produced unrealistic raindrop size distributions that negatively affected the simulated storm structures.
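Raindrop size distributions of the kind the scheme reportedly mishandled are often summarized with a simple exponential form. As an illustration only (this is the classic Marshall–Palmer reference distribution, not the UM's actual formulation or parameters), such a distribution can be sketched as:

```python
import math

def marshall_palmer(D_mm, rain_rate_mm_h):
    """Marshall-Palmer drop size distribution N(D) = N0 * exp(-lambda * D).

    Returns the number concentration (m^-3 mm^-1) of raindrops of
    diameter D_mm, for a given rain rate R in mm/h. Parameter values
    are the standard Marshall-Palmer (1948) fits, used here purely
    for illustration.
    """
    N0 = 8000.0                            # intercept parameter, m^-3 mm^-1
    lam = 4.1 * rain_rate_mm_h ** -0.21    # slope parameter, mm^-1
    return N0 * math.exp(-lam * D_mm)

# Heavier rain flattens the slope, i.e. shifts the distribution
# toward larger drops:
light = marshall_palmer(2.0, 1.0)    # 2 mm drops at 1 mm/h
heavy = marshall_palmer(2.0, 25.0)   # 2 mm drops at 25 mm/h
print(light, heavy)
```

A scheme whose effective slope and intercept parameters drift from realistic values will produce drop spectra, and hence evaporation, drag, and radar signatures, that distort the simulated storm structure, which is the kind of bias the paper describes.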

“Microphysics schemes are not all alike, and selecting the ‘right’ scheme can lead to more accurate prediction of the weather system of interest,” Johnson said, noting that one scheme may do better predicting a large-scale snow system but wouldn’t do well with a different type of precipitation system.

Johnson and his team compared the numerical UM output to polarimetric radar observations to validate their results. They plan to continue the evaluation of different microphysics schemes and document the weaknesses and biases in each one.

“Ideally, microphysics authors will incorporate this feedback and continue to improve their scheme designs,” Johnson said.

Researchers from the Center for Analysis and Prediction of Storms and the School of Meteorology at the University of Oklahoma, Purdue University, and the Korea Meteorological Administration contributed to this analysis. Grants from the Korea Meteorological Administration, the National Oceanic and Atmospheric Administration, and the National Science Foundation supported this work.


23 thoughts on “Biases and inconsistencies in weather forecast systems found”

  1. For raindrop size, the concentration of nucleating particles must be considered.
    If the conditions for rain occur but there is nothing for water vapour to coalesce on, then the raindrops will be bigger when they occur.

    Do models include the pollen count?
    Do models include the traffic volume?
    Do models include the effect of previous rainfall on dusty areas?


    But on the other hand, it must be noted that a grid scale of 1.5 km is far, far better than the models could manage back in the day.
    Back when the climate policy was determined and the science was settled.

  2. M Courtney
    Being a bum-boatie, I thought that the ‘Science was settled’.
    Haven’t we been assured of that many, very many times!?

    Maybe not quite so.
    Here as elsewhere, it seems.

    I still struggle to get my head round CO2.
    Surely it HELPS crop yields?

    Oh well, if models improve, I guess the answers might be less inaccurate.


  3. Very worrying that the Met Office has the lead on this software.

    This would be the same Met Office that the BBC replaced about 15 months ago to get better forecasting and better value for money.

  4. Too often, skill is confused with luck.

    Last week I posted an AGU paper from 2014 where the researchers showed a similar weather model bifurcation in Hurricane Sandy track predictions between the Euro’s ECMWF and NOAA’s GFS models, and the skill difference came down to the Euro model’s better implementation of cumulus cloud parameterization over the Caribbean Sea.

    Accuracy of early GFS and ECMWF Sandy (2012) track forecasts: Evidence for a dependence on cumulus parameterization

    NOAA can’t get cloud parameters correct for short-term weather model predictions, and yet they think their GCMs produce reasonable projections out to 50 years?

    • NOAA doesn’t even try to forecast out 50 years. Some researchers associated with NOAA do. NOAA’s operational seasonal forecasts are limited to 12 months out. They also do monthly and 3-month seasonal forecasts. These forecasts are limited to probability of higher or lower than average precip or temp. NWS-NCEP-CPC does these forecasts. Here is a typical example, currently the farthest look into the future (Jun-Jul-Aug 2019).

  5. They should call them “climate parables” instead of “models”. Eye makeup users do not complain about the rabbit cosmetics tests, and if they get an eye rash, do they say “are you calling me a rabbit??” If the USFS says do not feed the wild animals or they will forget to hunt, should we give more free services to the indigent? If we shift energy finance paradigms left, will the second law of thermodynamics proscribe industrial scientific freedom?

  6. The Met Office model for the UK runs hot on its medium-range (3–7 day) forecasts. It is very often one to two degrees warmer than the outcome, and very rarely colder than the outcome. I kept a spreadsheet on it for a while but it’s boring work!

    At the same time the Met Office started to issue press releases using the medium-range forecasts to claim that “this weekend will be the warmest etc. etc.”

  7. Unless they get the spatial resolution down to 1 mm, the models will never be accurate enough. Getting down that low will be impossible, therefore climate models will always be junk science. I am surprised that they have got down to 1.5 km.


  9. “Ideally, microphysics authors will incorporate this feedback and continue to improve their scheme designs,” Johnson said.

    “Designs” and “models” over again.

    Meanwhile, in the real world, it’s raining outside.

    • “Meanwhile, in the real world, it’s raining outside”

      In my real world, it is Sunny and Warm with very few clouds in the sky. The weather models, however, assure me that there are thunderstorms occurring at this very moment with heavy rain and lightning.

  10. I wonder who outside the UK uses the UM. As far as I know the NWS doesn’t, though the UM figures in its strategic planning. NWS’s primary model is the Global Forecast System (GFS). NWS uses many others; you can see the list here; note the lack of the UM. With the relevant model area, you can see the output. Hover your pointer over a model to see what the acronyms mean, how often they update, and relevant forecast area on the map. Ditto for the model areas, except no updates! Select an area and a model and you will be able to see the current output. NOTE: all of these are short-term forecast tools.

Comments are closed.